[Binary tar archive: the payload is gzip-compressed data and is not recoverable as text.]

Archive member listing (recovered from tar headers):
  var/home/core/zuul-output/                      (directory, mode 0755, owner core:core)
  var/home/core/zuul-output/logs/                 (directory, mode 0755, owner core:core)
  var/home/core/zuul-output/logs/kubelet.log.gz   (file, mode 0644, owner core:core; gzip archive containing kubelet.log)
(C!^TX ͡5m4VAJnN2,vx?%v1b <sP^`[ml+)vhq7+KC!CK0^S>oB2OATLQV`ɀi jK {T,BG \2KCc !s9,!!eء"$?'J\gm°*H +0Z6PZCiuRVSX D1I6xL[&S^xgk穹Vz8g*Q$sR@Kk[I){qvEkO;Rčʺ5r=pMp=hs.j6%`9ɮuڢ@,ỳ3 E{^V'کG/Ԧy,[ -UiYglqdÃ$Hf QH:iҤ; Ik> 쮑 NÄ7v鉑0cP|Uչ$;F>Pqtl36R߭ǗO=cCk;:ww_ <|[DzY6_u&\hkrd2Xj_ qS P`?{ܶ_n;cx4wML{ h)U= )$Rz["ABeC6fSA%G/0<9.:(#ߔsL(PmbCO !ХLLP"D4&uvWY*פL}{hEwۻN۸jr'mW7Zo ~k6kȷT jiكvWnut褈'+w]olkK#7;6ٰz37SV'wɡu^=3I7t:mdu:čz{Y͝NNz.\O.&wק5+@س|CEꢅ2ڝ=l"OnĤĬ!㟤5r @^j KD@c0E[2 M|8]._K hҬqkΘ$vfK8mIל*Qו5׾MLa>Fzl[+6V{% ~xc, R"*yB)0R1E"91Fa+%CL@>oii#¥`rcBPD1ʰP:)VDp%;V ǑBбNE~JKLŸ]03G¼oR5.;w3lU^R]X>ݪ_@˼iuCt.s2%lD;x , 왛m~{ݦn={ꥶi2q{Q@ݸ<X`f~?x,]>?f;5 `> AUz$]9Gƃ3c٫JL_$y1gs%1WlĽb:i0 iۋ-"ۅ#1 Zwq-js9w+-"%`T/"a#?uz̡H]JA)L {wg֔3S~qo7jbm.,..NvE ǔ6eGs#Y[[g3ȀdLZRGXZT4jR))E>?d!.99M_)ോ(=p/Cte5"}3%j;fqZiiw}j2OuAO 0@@hX`y:ĊjoQ`J̔`~rhE~G|k3*b i`8@2 ~+bBA&q-BF"%Aqƶ짦v4I+k9UI4{%yflt5}>^,}8zΏ++\嗥#3~$z)y679;;zQc#`%6 &+8fA˽7 h0i U G+8baDA$f(nj')m=EG3 kKfQZUn* {̘$U/~xapxƦ֋Ynj d':>Z*< u4n\ǹfYT{o=ο1@ZiK̭.sP 6ڽ"6튐v#lwp4R] (x+ -X5I~VChMLyU )iwր=InKmD(І¤)ƗT@ED4$ՐynUYQhDF6&d5 jjm@!hj(^S8_\$ώ$@~֎tlQٰ;6۸AY¨ S6bf ׳kJ+@=n)|K-bxÌo֪_&_vHS¦\v4k'dpքoUs}n d)OZj\pjhA9孑ຑ1V|¡¡¡}_q%4LjNXl9ScdbG64 *,$Iݽ۬d}xY/&Y:YuJ[`kQ(3+34+ԍ:u<_|ʉ"{@4&eqx\;jɀTؓ.(% ״ܳ&S+٭b,DBRDaII S"&:F!!FX3 9}ڿ0D<@?yf Cov|xxMo<NOgZۍzǣ;x[:7 GNuoMJKsrRݭ$[#qoM@hg4 *@hviH<OS2Aw;fse|wy ǻn7CLtfwN]rMfg{ Ff>8L;p{tw;윾tu>9}U76/v&ؿaV4{즣!0ןt :]_N+σ˟3}`itRˤ$]gӷ0R23:I7f\dn3a3LV4m־:%\L`fxhIwnl͝-a_LOR5ss"zz~y\3^ ̦= $eV|_' ݧ\W*Y2|tjsdhg6+ 9;IO,1Fzɢ]œ+uo]ᆱb~2 [x@`<!i @0E^K5hs+|c15)jfwo}%FOv0_<{RiCd )?#L\ stX929Yrsj9Wg(@\7& jRj 0jY= qWW~D &L):qˎF\يో+7RL_+!qGE`w_>8]ܜt?_u'rZo7;HÝ}R!›q<(g,˼sJ?{_ dNEBInPRkXK82IQY.8if$" LZ 1^d\`1`hK#b.}+ae|JaTZbLc3荠!@lC h1QbJr;|4Rsl^uǰ0H;=Mp9eH* 1<(|}&B̘$SPyJS:;PN0A7jSOo~~2x7O_<{wo7gjⶕ⒜$m"[[Hi8@^{ #2ٻ\f$] [`h$͌~3I,R5Kh e*+C*p?L$O{I F~ER0',#%Ǽv)K" V%g«XBs%ZB f:YKۘq! 
5Ms h}M@WII54C34Bǝ$9" ]&9a}hYAٮcrF0|3II"S:=21OysvCV5<ș4Z?t|{bl'M a\dxtQE+#V8PiBW.p9`'o{GNώk$}Gx"`w·Lx>E+<QA{cP5M۝"bqml$t:7p:+"%5dL)KTyp7bB45[wP/wA]+:޿ۼNX f~aki F-^bZʎ:sINaPز~&k~iarzKyбC}nIRN5y,ּ068#P1-O*z9YSK{,3WSlAkj3pU\XI2Me3SRS,iasCt۞RA&gnI4c2pb?3JU611Cn_ V܍fcՇi|Ĺ?Ⱦ;{{tFU>ZDc>Q6{Z ɯvZkY^Yߟ4urN.]~#n׬'WdgEIW}Y/2%~z,Y@$zŒ"yr)]y6|ә+x|sd\NN ' QWP<׎2 f`>Q Ggw-a :1%,JunZ k_ٶ0f #~RAVbq>l7ʩG` OZׇ͓aL`$$s\ 1m@ ! $!~&!0"^."{!ʟIx|[(*ccͷ M-P;Nhgl &/0ixPcN=:|sZn1Y.̀>-lI[><ig@8lxB(guRdU۔q1HFmB! ᳩǯf)fprMwqa;ےpI `~m/=3 שmS? :!Tdiێc,סFH€D=O}Wk7|&e#seiSvgC-l8[KrdR0cz'Mo ]׶|n;Nd2 YG-t#älS0Xqg," p(% HL&&%쟛14eҪJ j;Vszbg;{g韽}jx -ߖ_ہ"8w^ß;g쟼G[;,6T -9V6T,K',Qa׽h`Wۘ'*zS")IGmuFY1q?/?< ۚ~a~A1Dr oſJ4%],WP+pobl j Wn:w ߃Fi{sC3`ӑb5xijplfթCt~=ȃi?Ua*l7kyz3%> 2 xhjCW6Ld@`Hn45ZPb2TItyվӞilOG/%^k\+Z1O1<뵆ѫGVRX)NܓiH:{-btSR `:޳3v6u:/t|ܿrO זnǟ&*OO.@$G{УْyO b,Guy}GtR)= c 2ki!?̼Y/K&tP^jxWL!eٔ73f[fBE[@ BdL,W]\1FGZPQ(*[8,Oy8Y@lKr!O`,l0LaeU@Jg;9d]M-18ȿ /u0ow=,g[c6YwwT>aMmsbHr|0LAVSng~`YpCa:3pOG($bGfgƼRG@ ڙ\˲CY{/t˨E6D0er2mNwGW!FM^?tἎoIL9b6@A0.a!xI68Iqp/

Rx'S]eEiN:CDeZܱB'`XU =r\bF& ]ZEqZW55͢HB#qõB L3 Bb' 90#\NgHU$!ry:t -0r)C⚎;5CrYԊ"?riRCؠ}܂%/xH|{_ʇQ܎Ijd5 E(#,XAL3`w9ީ)hSVamjhEz1G1gjK{ilz&H.C_H͐'J5<ϲ]עuSt)nݫ BϕI3\{\v򹢃5m>cT[[6- HuJ8Tql<|(&) %EyĢ oyGtxx"OM%D*zk<,G`!4@Yq}SƋ4ooT=fا#R&o9&:^"-b6:qeXL\x!ZTϿ+~.E8 qb4C|'ExP'zot˲SU>͉"_6C\/7M(\X _<7\)0}:b"-xʯXMSk /F̸pL89mSDh2(u[O&qq|BHہ2`0Y`P A o6^b5d@u>y` ,OyYn"(̗]g 7"4op B7@\ xMx(_L `|2#7f[(Q'%OlDU UvWŜ+CƕK+W>Ɯb99a#K+ "n1 1:eEy0n%)R,wU} vgNreFLh$26$S,sU*ŭHdع;ң#+i\o ,:b^8kքx]%)1AbX'-(1iu8cVi2d3nφ{%XES:zS֕ӪZt^*]އ׽ҿkG*=g%ϲ0sRw7ʛjbͶ@Hd%!$sxxK`"ӡD#EဌyPnEDu2@г!&P^э7sF4 J Yx! LB//YF0hx/vHDf[쐳"৓;n _,b.@ PƄp©I,S#FkuN,b1˂t(殝ۖ x僺C-V75[  [鴢ZN1][K~/͎ߣ kxN[LIIs77p dGI电L{ 1u^f07\a\%bfeNajYeg t?F'g ;y(P(O<;JƀpoUt/oWp$Od{3s^'v1=YTC!WMg흟{7L^nu]߆όoQ6T?|ئ4'蜾8Yi3_d-,ê9zD]vuҟHqv'dֈUENrZѣFE yR/ΉS(kT2Q8bTnqգpO{O^/n3[aERW>!`/'j1R\t.yO( `@AN8yNWP$vHǀm\"0 PRj,rn ̙CNX`"3F+BAݠŇ R1#{vfهyAk/%y_j$F+(R[~[lQ_z@0gDgA{(X4@,LGhR%#,jz|?D7X.LþxV΢34mvXޭv;x1(vhXb#VG#$KS%JQ*d ylk̑ p0uoV_ES iքL7= ݅;(Q^&PT N9CZFc L#/6Q^MTo!=QD5Ncp0u# `T",bS. epR9*;oi>vۦ?ׇ$lN0 ۳Ua[C[SNؚtkiƺόRۭvKN{׃qݧ话ҭK8<]a쏷S^) af ln Fuv60nn7Mg֞܅qZM+V4퐚vv0(\Ga-Ձwi8œ_y:f$i,52HL%~4&J;JI!PJOL 2&bT(.R#c&ˉQ&8&nるa-tvk@6֖V-7򌠲<nVb9knMd5 s.! 9W(~\m?jp9GfGف$.` DI##&:JR~,mҳՋ;ݬ0JeJDcr<)ŋ}^ma8Ω30r$st89IHBٝKH(k֌7x9)ͳԄGAxH${ѿpv+CHd|I3][z#w Ҋ2D:RX.x6VB?tUjȆzXrSLu,T*d /m菲W.F⼟}Q " ̢[r01Xh9 \#_{.?$sUI#֜fuKkb9\Bޢ~ :><]SFXH[Yhtm}ϳ~S [gѼe o_W@0=ƧJ2F?Qr37[K-EM8b=٘3MXzB6ng_%+e7@濓2kka!f\'_qvQdƽQZF<~)czH4|1pr7 _pvĞIKj@^k\s=}zam]gsLQ3}#4YEy ?N:=ZZ`EH:b8@s`pc"ؽ ̢%pXb[ DO׊GqQ넕N #>j'Q\)k#;"|zF@Is4;OSXUm_$&`TKC`CPPr쯎,(K8Lߣ Cx P&mNNN}n\ɎqJoj0oF|?^.G`k'ߟ?h9h'aFiXzx5,K墇E>XQ Ùu|g>9kB'h1\p!?EX\BV0h.F׷mv fn*{3vyh\gէRZ5w~3&_0yQդY >}l֛&4ӸPgʙ:@{0Cͬ3lvɿx(y0B2X{`_є1aq vĜ$cHdlH(X3sUF~P駎:8kքx]%8߼3&H e0fu =)61{j˩ל׼-:sͶm+b9jN))} Z@y9("J'˂}JZ9oZVY*]iٻ޶-Wcm(vIm6قndId;ECJ2eKiQdE 3 Gܯ0|Ͼ|]E8s4 _JЛ%9={G=F>f0mUG&Jw56-n˄fA646qӋ,,&732޿p3;/EgƄaf / a1|w'\lL6 iqN- T)hW=q5]VfE7Ybq|tYDӬT*  uҫE-t-MʳRQ~pIsҭ [OV ;ZO'sq+=[-vwGHBDŸgQrNI0S0_pVKGư@&@[˟ku XݎN~v\'^<)7cȱ7 ٬x$:Y6v>α/`dx*ei*}$H(lo7#J] DM\1(q. 
I*eSmm*JE $,#F"RkRcZiS֋=Ev<v{us_[o^ f_iؘܩ!xC6k:(*8#E9Jvض`(v|qMZ-~LbA 芒tUSLOXdvBҺBWж5;tEiEtp3 ]D)dOW+Ψ6!FCW ]!CG4=]uV & sg rgAD~BtA0+̩;sWIW rvB.ҕVg e\+@l;]J-{"]i qI]!`a++thbBtA2P62` .'WWRtEB0BLBWV~#TCW99]_©WS~Zֲ:( -5tTSI'^&e~!{B,=]ukV>&\w 䤧Ustt(9骃t%nr5>8Tg4̤NP_}I$ۿw$<&Ċ8O/6`O( !3w'd^qݡl70x4_y[U_}8ޤJ0H|EYwk[aoM4ב xC{zgS95.cl?G/J.ΫzҲWBvc}BdfD+Trg*l ?Ji?E.&3o]aG]/[Zw| !LXx@c$@_=3v*urr;^~(u4tIB )˾_o Uk%٪~d|Lދ0>P\xq6!ӊx* 8aSŤ)>oI‹Bgxd!ʀ?u)>Q$gͽqvQbTa1iW RDb FP<}'dniKG}5<yļS_ӊTpw]J:p!ihV)@C @S*˩mPǗM4rWEPY)ϔ(-mI=ދ>FYuW%A^9KԪ<"Tb;"b5NsIJ4ԁ3ht+ [\ખ;xkna:1ד8\vJ #6&Ry7f5Jo1DTʤ|%{,=RI>?{׳*d[ ݺ4rտ}lKvz#a'ʕ>hiC lEVZI[Q>bDc %-HR 0u~ˈ30VV=]u@X>B2 :J=ƴ)oχXEӳnhV-Z]Ղ+؉ЭV[ "-SW]jzj oA0X B>ZkNWR2UYu0WBG^]1U"NWtE0+̩; ]|t(ˤUwJhNHW jFNWUJuF k ]\EBt(e?EҔP f:CWW Cݟ%.ҕ*ԕGI]!\wV PR*{ ]YriR ]\whU Qճ+^|@G[LWSuְU=e t{z)VBBWI Q2UqBs0•5t(=]u8gZ `C3tp3thi;]!J{ ] .6'Ź\a\+DZ?w( 骃t%k g;m+Di`tgJ;DWK ]!Sﻪֶ~ P Ow4VGW!\ɏ+C(ҟCJZ=֖B3_鲌wY#225} '#(E4+w"kг[G40]h+lݙ/85rvBj|'J.Kغpuf5oFCWŀHU-L:WxyZs"uU %'-[5JtTSEBVBW2vB% Uw).RWXui0Wžt%K$%|d`ףl?nEO~ ^MIOyY #?^MiAѯ_$^PcZgɼfע4X$֩ yI)ɢ/=sLiV1?1<8oh/i XܸF<\>ٿ޿<~_?G}xϢ仒 W F3HYG~-r֋5+ P`0n  p~_xcܫb ߄7%ڭWY3v8[;ևB lh ~>N[ԧVm^XR69C|| 'wD8 f'7h߈xf"HҚ$22D<"4ᡍEO)"" ɭ5>9/WpV|r栗ckɗ!t˸r73phѮ^&(%*$I2{sE-[[64r}_c ;M_xr;.f 덃RO-Ef.?G޻yxQQ¶jw.LFWɏrO o媐dɪj}xT~]/+]gMKcAsֈ}4/[`{Aj胯߬wҬUޟ4;Tּ?C187zb܏$L>y9 x 0}'eޛl]9(ՂCz=%3,(!ڛGSo1(/CCIʍm\]G(և dAr55 VOhsMHa}C"3,f1% &J\<[ٯOgM3O\@T[mt$- JMRo*ZA&@LErj8 @jS)eV@$0&&*"&Rbk+ ^Sדrd)w,fepSFMP'| ܛ; {PD^2!7UP!&w1oOҌ(4,m"oʻb?_,|QfY>4+MEcw]yEKn t]`2f~5iJc~ט/E44a`Ub)U4HLbeaBD̤I@fLFmiD&|i#Uͣp5mw9ӋE8?è25t⮵fwDj]ݪwJ"`k,%cGb>*J;= CM?4^̇q`W_ZoI6Lf|WrO].=F,+:$w'p ;{|/U(e%ID),JGa3(EGY[˟%>JQXʱV`(\j_*= ܃[r5@]WƖg8g+<jF_PD:68O:oc|!~F&SK4(~?ߪ8WM\H-1FD&4aߢX8QjTleHu~;^+\)ÍQqޚuFlvO[5pgUs˗;%ׂНE7>BD8-k̶xg:s_:x#u7^o> I* HMi-2Tgԥf =q8Hm&.T(NROuYjc CBrd@lR)8,  "{V/8U NK˙k ؐ R8Vn?ر}KoB!'`u-o-,C:}[vl~9X]kaڤ׆?ZK'5?޵0q鿲QGqy?|r,;KR͓B0D߯gw U;_Tj==95`%JAVVtv+˽ ;A9!:pdz)Pʍ"6mȰ, Ƹ#aREbH1m 2m-*~h@{;+ڈ"~_ cބ5-\-h6[SSG;/zp'6+ ND.J7g\`rPigpt\%*yz4l%e&#QHYu2LwZ D佖豉hj4BZ";YMs"7.*pv&% 
/cJ\:(& |B5mp:*k,f8903';F nz1?e/ 7;iK +tƓvJ<ʜp. Jt*I,Upl IM ר]QQq!,$S{+%R >^5ޗpL4,GkQʬ/bbrK%bdF/cLoD?u k>H!+’zՇO~ 7C[[Pf)PQk %I/P!`V0/20EJGt`ZD"rR5lVg$a iU3@İb A|@N@ւc}9AFr2g$cZadPg($^2 YobA% 9Ij$f.jBVG[Ik%]JAK^v?gh~sS[L|F'_*}wus8>n2!unMޟ{};ҷÝaHwtw/! |Fl#[+b_VnM)7PGm-Ɗ)"%[m(#6",#TJ E"#{F$\.%6!`BL1jveZt^jF~5l|dr6:{nӶ Hut{0Ne)R$J.<7 ӯ S,w#քզ[!dkXuˉN05 GFnqA─5d4&Ҹ@T [d <|BzE_PԸS-R64:&A/IsfEwZI%HD0 AP$=rqcPB(~S]ra݄u .:e88J ʈ IE@$ctBa?CA(_\]4m"HE=JÂz- (!!M4a-BrU֚ B  5+H{pΤRz$W)L;WJcTkjH|J\?tY%bD;xM?,I̝6o_m GYz1/nJXun p+cv c~ˆm`+|;4H!ٴ'S rڣ|" :>42Oxa@{g_-h;K*ƅ:ٯTt0"7Ǣ,˞L{F2H~=v^q|HJA)E8b kq鼾7W  AN[lB"MtH\5/a]~ooP|X^WdĮx->dzgP:803{*fE:[^^1ܖL3%+kiدi*4^&LrJ)L"<1 cosw_wi9F77{[j*a[bzѮLkP[9,sn.`H*\Bpy"WC)\h p,\.3LiRhT5 u8gjKr,U!S$hM6鶩Hx{xO;QhH,}5i*zdA-},E#JЃ;0jY(<̣$?0bQG"`"RSFDD b F2"Y8H7t} v}Y7EL;꼴 ;,mz(2>|eVnZKE* O{DLDX,-,xGI4 Hش( եg[~_s6; --hq{: ~xID2[lsn%t\<`BB6,IJۈjIm6wN}nNdCm:.tfp.Bj}r .VݭPbթRuib}~3Z<@M5S1 1~K H' 0u"|ͬ:P ڨ A=8bTyZ{rޓ.fbh諞!7j(* oC|O@.?*Xjp{A 07|ܞxYB.S^Oxst=qvux>JEy?\AU/YGC*6‭i-m=3{9A t)LfN7go}5!\{hTW|:QcwV"yxRO҂e`X$ iӧm>*4=,'8"7=Pa^kЧ9-r͙A6ƒE ,-lO~<|rIw!`N9&r6ȹQjEw9lVL#VDaq{cfg-2%9ly[;.G4o,6f,ܚxhJ$>z9#ZKJ Z9G^liyYGH(򹖜N)bxЄޢ# ct>za6*7+XbKix#gXcfTLD.Ӈc1Q+ԾXLTJX|9R\ w{Ug{;8]='(aP)蒎$v_btLlO9{K50A'KSفٓ? )Ƹ*7qQyM0vny.H/3E>)|eg^OYp 0=` /_^s]g do}~oWc_l~?ݱ?wm7 'ϯgw9~˚8x Eq_>~}|wNфa#I={~5Oπiӿk"{>;ǜ8ޙ)`y}g@PVFܖWSܥU~}+">tV?]z? z]w!e ! Q_& ǝ?.\ Ii}i˹{erSm+Ukmy0׿B:^^oa39-hp1|+ny1K 'eveOG05w/[5R(ӫRHA>jY~Uͥ߇r~*D X>Y(OQr+~Y+GV[BSL̷,/7d lJW7FmeÒb Lh}@" td'B8Ʈgp:"YOnbxhQ|ݝnJ9dUb t(bqbStk%Vξ>urL]]%*gOc1%".uD<ڎZHj;*垡+⭺ǚS&H]QW\&E]%jޫDzXn+Xgݿ컗0/Uf;)Tg3ѳ_*v jN;ߛ) sːH{EKuphMӄTyDG4eM.6pi%~%~%%~%%%%%KK%%~~%~%%%%=e x.KmR. \ .mcI3"&pPrz̝ >Yz5?7@.9\!` -%& q0&́{Ghb$+D ;TLlLVh̤tY.JuκVy[y! |R/Vm͋S'ʊ*;M֓Ig3Cr%-1PT-GLcS= fߓYVDX\A`%a$Ibv.;4>`F?cGzС3~MWaBn$N_ ts;T`nŬ^]^D `iRKk7A@l샏;ذZ;p@=~/HR g6JJ%frH. 
9* "l)SN9|/bءAΔŗ2|xny/4^,A,C, V`2!E P4*HDYWrY9Sb.,Cw$aE1X1)W@Qf0̢Yh`k!Ttb :}H؃ݜAdYΘ!C\&RTA )DɆ6FYHz*U2ֱ?:H i=]=Ptv(y8?|wA9Ԥ /%g}[Y*C0gdʊӖMi5y~fBL5zM KjIflкpƣqI:8X16Zac^.3!Jm&>g.KR&N`]T[o:vkr$R.vҳ޴sgf:ȣ[ R2 Lk{͹M~s]?>fWPOnw,n/0.כ>dK]4hD4s P^xL|5Va$˷x4qe^#x*˼H"p~6вDE *R!˕W:9A" d#uRF7Ace7Of2uGXaWr,zEJ!ZЪJ,#Y E(#~eUؿL9XI:}"F<)||5[`qy[%vvwYnY]t?Ϳvf\jN$1?4>vcjSvUa~J~j) xǡհe vh7aɒiO"}$u.ںY(×$-IM݅ʋͯ`.yGZV,FmToF/O7նñXS}͒+2/%nw"%nٻ+2Rr<_ݻ6]fl|"#r:[[-,Ӷ%e6{ke[E;ii?>|82ɼ2Vܴ:e߻C#hɽ-bW6R3- {{o5lY|T&hp3 !bĕs~o KnZ-7pX4,?\[iod @ژ^Syٷ3@7L/>-dyo=o7=SwUW1GV19HtLzx<">VZl0)f`rdK\e_H)C34vJѐo[ɟp,2.pY㺄掠GB̸EqQ\2&Śt C o,At^2Qڬo'Fd2{x ma庝(ehwJ<4: V9ܷ 5]ɕR"+GdW*I ?bDRJ_t+ꦢ]થ$(N7BF(t\ '[(WWWmH Ԑ[cjM;qcPQI3YZ_0π*gOpjB yg_e&A&cQ R\$!e_d"'0 gW"{+ې+0[jɽk28.o~P+T=;lz) ~D%.xM͋׼? >Ⱦwwu?w-8]Zd.?Ke< `zY!W_Yxa8z[w SKk0;Ք"Q&o< \_<Ȟm:.`UtroImiS[<_eJδz,mgj`F/P>^?QMuuŃykD\5|fVi׷:&"g$t6<]XDK6Yq,Ѽ/T2֕>ɀV3eUfѰ]p|I{?\~(߂N؉NCGM#؃Ø2Cȍ'kh[[qYƐ٢jO_4wvwVpx˫3;)]V:CAz/l(r2v/7fm=24ۊ@C,Sڰ#⣿|vqƛ-^Zk2~5Z+0}VH0sq:7BwmNJka:h9 V\!۰srbbg; Zf^Sy|bv/=aZ:m2ӸWՉb{D#T˟0P%[2@L^usBA.zn"Rᑵmn-A=jgj }p铯՗Mf(ndBNR2l|IJaKǧךkԜ%RjqAkCXhH>bDW+2ёaT񶴛P $u5sJ>Ǽ;dfEԒjVX pytkcCP.,OJgcz^*rb9eDBZs9aJ ͊tc V3o ԬC`v3qgNGo=df_{̵| %2Ek9'}$$K ie^f<`O P ~1$318j ǐIPY ȁ3u (RW7EE`筍E!$2 =bz0LdY.ALTYQ8xR` 3J<DMFW_ V`+GS2|ol 1 J8 92l\'j zYہKa/u~2T8$n+`T]ZJp5+(PX}c2ʔ &!y!X}+!1*(oHr;!Ob=`Ɓ@Uo'3,6Dx-6g023!N:=<[QdcVq|cPQQL"pi_!L 9[tFJ;~&2]IoS7>Bz00V8&&P7|^ (9( }#"Lش`yE} l*g0LtXEX~@p^& dKineq ,p zESV_ ] SȖ+޼ZH!uo^I*(7E[$d=-`6`=n"XQܾϹV1Vd`l=Ig@\@#vZlL1] pwNa>62SMTal`9+Znb Zx2LwHx8AbC Tqfu WsU`E Y K!CPowX4ƒJG ù:3sk8BzW;BJ,"8mJJ}Fi0HX^<myp@1TBHM<s<  CөHqF$DiZm2 ~Qs0 ;›H%sc< S*Os4'hgC2qa%,/ s&aҒEqӘ8C(8K`Y蘿m`JsHdXzs53µ_ᮇKb`#^8<+QB8t0tȔ1lzizKUv\ MCdbdPw P\7YHMq& F `{,jo-=XZ +Apn Y࣡Alg,@Y~ٵ!5wR ôKTAN%F@.a4GPjFc\}rr4LKPvοT!ͥDq&Hxo#)8Fi'UL{T ӈO7:L/t) [^,bbgߏ=<9 F멯C/hh7c룧򱁛I!'NB=8I=Nqb{'8lj=Nqb{'8lj=Nqb{'8lj=Nqb{'8lj=Nqb{'8lj=Nqb{_Y81'9v0co?EYH0/aST.C0Q<]3)"Ca q5oٻ~ 
1G)-%uvw8xwN^C)^$*r"!'cC)qfxPz܌J&$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3$7Cr3++7%Y׈/>ke!_d+Oaϐ|-0h"rAHʣaX< ("me{e<`.J>"[.ȔȎ;! Yk}() hVbxIVÓĵ^d6\}9Ǥk<1Y/60ѵ% n^oTS{8.i=Fx c5R uZ*j#2֫nJ\=Cs$Wٰ uy2*HY׽DAGFg*@#="8nKco?E lKѰ)^,^Yzr&p#?Νfx^@q16> KŸ׮ 5ڧ4^Et`:W۟_hyT0".ҙUuXTUTC4'|)ej%JP-?nWfv/k6lptGF6x)eBdK8"L( MD)uh)cE#2 (!|1>J u1Wk*&s ͕ut ژWfX%_孀Rr2W2*gէ/cɤx孅bpxZsL>Z ^4*%sЪOdF Y\Zīnʩcdb2(\\KKCd)K:̘ pEVsflU@ɻzJR:+֢>`u1W^ys%4Nb2W\T\t$Jfq& hʛrjq1Wi\&\9Xj])_usPfs0WUěold̴f2QJw b/%aU=wU ?]J]luOOm6Ӥ|[{;A!?^ m.?.[0??ߡ]4ߵdGgKj }whɖ!ۣC=ǽ1wf/4NڃAΙ_)_&n3^7w[;懍_+(}3)E"/En$Z#g*#fi^h=kdާH[7TwY ̹p?lUr,A^h8ov[;[[VЗ? N/U dB} o%x!mb猳 VpL<Ή -%.L5Lf%?HDP1\=;Xv'ׇwcN;ߛ{}zwZ cf;L 膓{E17i5˾ժK-v.q_:om;|SEuZ9I6f{I9F3i̱&b1zgSkk76߼?9hiVQFpǼ_;Nk9x{xY;67SP ld,7 mc#1p͜ƒ*bpƗ:3U؋eυ:|peH /\<)PJqK¬I#b"d<( Oiel5s"OE&W#[uNsy0""%4ɵV"ww~7>LN贽r]w\?G ?'kwAy{}eƓ[~4ryh _'Jʟ_-Y_&x\lOëڃE3?yܞ7..o!H[o'm\|Iq#:Yڍ '[{100yRy"|qJ'L}-MTG&7~}+Z?/tSWqvѓߙN'7~ow4ӗ {}w+NNȚ6#SʲO5R b$U.ll$Ѱ8^gr,L|"FHXFW&B.>,'zcvfFEIGZ-a,p\^Jx *^,ԴruTKy ;؝N#d8AC=ۮ?uh3(w4C<{˜ٻm-W8N d/ʹi\7{gdrUӒ*JNLQDVL+$9y࠳MNJ;.@LdG O7}4]؟LT(`7bmK`cPA*t0<o<LNWlMn~{vPsUd|ko<'O"/'>;9le6+ e$P${:9*s3Ԣa"?۴ ZP%6דdCk|)# |[1sݮnQ:])*=g;@\킎MNzU.Ovݨ5g7gיAg۪\ӗsa\A_}eQv#:۾[ \uܝx{ɺxnz1Wh n}HD6$魨gkY,MTDYY˄]m/ a7_嗓4@({Z_oP꘭5mn( yY~ SAU"(zpU>lUo܃?$5p?}~EF0w0WM/:f^סM x|oz0wkR̓yԙogf-z*d}FqGYTv_'4h 4fBg&g#.g_EoF@7O[Lou(ўDD H,\o:dc0QؗνX4 =9fžP'{oH#Zc"nkŘǢ@@C\֩;'}wrwD;NQ|tˆp}?7ord0>"]`VG'6g/6_}FЦ!\tE:8.'9Woi` )7ESb¢m`Q1JЧ< xʊZ+Y4UAU!GD+#7Oɕ+ixsh%ԶnM٠ky}{4\rdhM.t¹:ϟ.^Lbq}yb sJF3Xʉ 6+Xq0J5IOjdkrSWҿ{a%b.ߞ|~Ix]dҋR&!F' 4>5<5YӚIybԎӖ2_ "}`Ҭ/C"%ZVV崅Š_KMwvĵ2h2}@5OW̚7gwwLJ* *r^Du# sÚjH N 3\!m/ !yj J2],S ޜ#ɷ ga'}j:VX)36I83I)J7[Juű/"g=ƨR]?Ϧ'Wpgi&@bM'Dbҙ>[]%'d#<*kr]ewRˁ.NP|H̜mtNy/k1sWNg6ySY qAüyeSYN^wf=)w^5 ~+Nt8z ӫӚo"5Z"TUdBHiAռ߉9Hy[ Z6䮲GflI&KK!aN 6M8IO~V ;bR@2rMȀ9ݳ.qخ)kNc-}E(ݻ|ڨP 2#Á5۶Xαv3D{~(fk[P ˲F9%Qf>~1+ԅ;dHvs2( :d|9fű꺖KͯsݝЎYbtH\HS}-N:=庣w 'j)ҁ]]BRv7|]sfN^D4+,||e[NYW>ݕ$3NRCXʩ)w̼4;4? 
}g^%M{q1Y`cU=jI`Qk1091FҤ m\$eytWd攢{'GO\WTr `YBz!h])%(FJjߣaAUDK@ؿeO?[VѼPIKETS ~c7ƀҗ$Lҭ={jǓ4@1wq/8Jw精;֪/cj.Y=:KgRܠSѕz=R7[ ]Ef2n+aAB7'S?]DPD12HL?w/uk m~\w=) S.Qv=7%0nHQ< |1BMcdw\fF=$릂*X*[5 +L2ͶB@c/$MrLjjpy'q&C PbAn ? bޭt:Q,'"V`Q FdLwJwbnuNX!/A4o-Eg7rZ,n ׍pra4W9o-*o^%^hX3;ɨf 8?ȳMYF-r.G̎܂ .: n@l|ɗ>l⤯U!G8•H çʕ4lձ]Աi׸xU&Fr :aÄ0H(tƌI?0i )*Oza*R?:׎:<N:n#y[95ԧX6UJ `5zЪ.t 1iƆ 1v1nN|41 E JM"$9e4`'^{y9t5n\t~x'xu lt4f)L +3S=a}{>ǃDϒneO0 HMQ_4X i0 Un$!1RQ,}aHXG1E9=0 s!"4aD# < b4t=H>-6)+XIHjef}voI7{4V`-Gbq<Ia>ei_ $wnwCMڑ%E-9bwKdmq0c%zzXU$EN=6^ύnZ0z3*Q,#t$9Kn:㭊Kw`XRNGc.ԽClhiXڹEy>z%i?3g̴3~fLi?3g̴3{y1}`ܣ KHo6-ERqEi`[) FJ`$) FJ`q/zp)%H_J`$) FJ`$gQpkcaٗ 08 h\eYJL&0"5ϓ֛B Azg\mM6իM (qqKd/.3X ?6{YA>wX+qș9^RyzJnG1&\I_sC[?ntPPdF}!7-xRas2/QL+2[Rzo<UX{[͝Vc<&幖ӐchnDvdE6͘.΍,k#vX̎ :\-_P]Y>[/W]OOr&isH ~eS3gU\ B4v.?Ϊ:]vhB Zf6v>[ z^i_-22DeAW׫~\c~>cۣzsZ,mb+u:)R!1ןe| >btѬqcb pWa"ɧ8ceS{ u896TL~~;ဧϢem~2E Y*Nqr/x)RLݕ) gGW֔+?Y85>V!vk?ݯҏ `iK_&&5IgjJ^.[U+2QAZ _hRk)Ԧb_A+'ʤ7 NW+2Ǯ3ޙ)GH8Ú/`bv~RE`v$,7 mE Lu;&+S UǼYyTwoZI^F3e/rveo QʧXJ+z2g 0Σ,Yј߶QOErÜQf YcI|!to^b߬e: )'ax>1eB)7js82H"$A[R.k%#RHDIF MA{=+߷vѭt>ZlkA+oa:&v!t[m:QGAwLWw=2֗{D+H׃)~AwT=+՟"\BW]+@ɑNtutE1'[:H]N0N~_DG0 Ͼ'KIطc앞+|_YW0GU{tѪ;#JR-$]`yc+@P*$ҕb>9X#\g;er&zL`' ܟU^DW1шRDWotéWiفq]Wyg8]Ru-v+SeGt+UD+P*$h#UP_*tQDWHWKH*f7trh%:]E*%gXOW٫|W hU8L'@Dh &wwq~kwqYk*# 0VGb}/.\"r pˣgҨ_ 8v!T7ktbXϗ,Cf~cQ*? L,~%{i[r\ K H{rV%EI(ߙdƀ-zث{ :-?EQvSCe.YWhc"Rr@#2k#vE%͢wD4˝0mLҕ 6K- 6LsѮ~ɩP4_П nDIg>]z \RV[L2$Y_5uͥEfQqB::zZM[~ߕY4A_%dDD*Z}%^F+1nK76߬>hT+{Xy'> LvѳmO_._XGj)"2A?ϛe'1DuGg=tVd]V?dY8KRv,jtK < S1\4Dk )Sqo$EpI .#ʲQ6pL3;hlW:_\\-S X+Pf'd)V Nj@30A˖ޖ&D]1zDq#wQD!?m`0"UX._G~߉~".Rj,/$g reG8߉ !p\GrVhhispdC6˥KZՓc]8Ul5FCAT reEz*+FqӇa)k4 S7٦P<[fB ۶G(u0T׊rrN˴ə( .t^+D+a ">V;h 6ȴu{\ž8{*`& tx;"[릩jn2L40Ml F5UOuy6ٖM@k}NOɳo?{tW IT oa7 (&/L()| '("MP'@>/`s]`[o>]^>= 93+x#Zͷt~ڹz yQ %# )toq'=#oGA)D?M2psX}zI9HEBov€|CxJ(Ώ|Cr]t~@t$Dt 72hL3~u& xNP »h$M|# Ӽ LhhɁ5߿1d7 ']wCx4?.-.C>r|.@ܶM3 4{j%? ؃]GKڷ4QE%g2*%?,(dpGrgW_)~JA J 6va"O^geʎ`h}h"9}? 
k`n}~H!w`?l]7NIhE7"45?\^ e%Dd]V] L{|TW!{s(o3hI*WJڛ-uJn=XǶ1%czMKȻU"$g!׆~=%0b\j3 2h0/hSFA@XANb^Y;lPy7\[q-:X`*t} :5" Y3)CJ\i ) <3A('jGg4G65p[Q\sAQrh9- 4dvCIƜI|1 `"M&xRva BεE$ 4ۂ;wGpR\lI|]Wx;MQ8M-uOK#_,"p8-E 粻B XDT?E[RY>,89@B0NX >Bg\cIֶ=/SU? e:0G,V9~:)"y!2t}C]`CXp.' VKCƻ|IEOY+h.A1&[xi%ͺyj:Y\uGIkGQjX]MFW׾8nT7J@fow9?7_.'D[+}ki(u-BGu5mH%'ɱޥQPv5͝1Giy,-< eӻYx <~;9v`r[O ^X~{IR #oV~rRZ%Ml 9%$r?O>M4/<42` ?M2f#USNccYOU axTTLaa~vlܒvTÎp;29\jH뺡>,i;8k*VgX1Z8b5E\ZN" lel|Hw ùumɼ8RW=qڶ `Dz0p"u`O?FuچsD.aTBtsSkT#vuH.X_[]͉e]u5'׽6xNlݯsm|X~+ήʺO UV"jF]*Bl5+RRWQ]injRW?u:CQWEwXjzJ rnEoh8$IF ^wXʹ,gr+NO=K"%X,p}۲ +q|-I>n]fv Yx]'xm"Vݩ//ʔ;SY+(Cx'xN.c[u`T0?Eƃc8}86udaU=Gtus4=G, '{|j0Qj#aY6`WV*%J_= ,W1 SWp)l|N5+YBArlY^"Řj<DJhL0v 3ėt/R>|ub̢//#-0uOpVxj'UD;ZxDv_M>Y$ \SUp4ۦmzg[* 4 Tr< TruWgmr9IYcR8#*B4 grcs<]7 t|Uߥ1L_d K9>Y7ƵŬfrCY|BkA1^7)_1"ꥵjթы C81 &E.gYCj+㣛A6(9^,cQ:Suf6&34nQ ^2u` um2|m[%90xס@3MXq%jwh\8 zv`;q \6H` M۰%^ 7TI 9,: E} }'eR/r^׿]2EoUq—]{35um˙C$ ĎSBRiEܖ<rt%C('m 0yp&0M O+M^k.fpMƹn.ĸEQ둻'zYC9j<%cZz.>+ ܤ,=k?6ߊ_;a}պtU#)k5-Lyse:qtt+f.Kr6P&u&<|dS:u)\tA- ¿K1k=Oʃ8g7CxsEd'c)&*AVW#Ai.:^9{ qٓ8'[fxf%r&^} <;IDv[_f&}QZ {45Pj= zEc~ZS{mgWOk\_pȾOg7Bo8ԓ9'TRyɕѢA årD%&gEF+CRHamoTI5DWCRٺU`)@՛m!lY^;poWS؏]}LqY@-<ḁ#LJ Ew.HdLgaI_iAb/ D67l܃gc7:"إG#I\J /@$ @ gZ*D|rm&y;5&m; Le;Iس1%XPT. 6]J[XWKC vV“D!)O^(7wʣbgajѵANЩ B?ie 4||c8$0m 7|it%. w[J&-K0"&(1(u)3|pzmVW)0~>T76ZPbv5{oQAW㉯L-HT)0fØǘ%ͺ :|dj⁙bg'7 ^tcۑ[N- C6PPiCI ݯ^T 7Rn\Hi|^F1$Iew)/0$?cbR‹ w!>Əg(NJϯ:'ln*g{f^p]YXnONs*2b̛(VҌLTU!Ljh؎UKNPil4kU\'+PJ1oMlπ/S r'^Vu%ڭgGK7tΫ'/\J~gq!BnX? /INgaߥyv-}hGFM$apfhy8oеj25'8,_@| yPئ="B=ꖃl,c7]ZuYcuB>o|ib蒟h/ʨ_lQ+xk:~P6,0Nœ9QXD_I_~kNw`!DC7 [p zݩz,t0D0 Ju8>nWmzam*l]hx uvyŞo.w-2C7,q M˧ %ϸڔP۳.| USXE r16iHLcge|M*ݬC6`%;棚nrWiAYq(XmR}HyDI˲ k '(^  <".0VXF0Rݍ~T@pGMG5~R_G;ߨb+mtc66xˇa/i!E4C$DuQG#Ǣ{6®k"Bued6|l”dw1L#cZMӢ8HBX3yIIrYx^+jу)INHXo.h`tS|܍DfP/1r-HC}^(H =,[q#=T. kAb=CHMs7-S˱mWB=C/H`.tö@_u3XZOqOl)UIW<ݔm9Sͧm-7Hy;ېGj˱3).ᘃ ςw_NYrYK~aO<ڃ"G{{ 2WG&7kuسwSM:Hzu5znlXX5DZMl61Lj*8ʀ~]-@.  
=ˋN&y8-F{3RNh0}oO*ooND,(xe$q4td74]t8HPv$îͤϽ5Fy?;_4G>긡@Cm}÷Fcl`m #Mer6j I46-,dQL@sO (1m&Z$:] WXd,= ˨g 0lϣ&,N]L,p9vA I:xi0k8j1J~Pq7z8 OU_( YMU-X{s.Q7b}Sg#=ZZ)QZhJ_ٜwRitG7r |w? EeR8?-y64̀{sAi&pSt|`mAMLLm؁߄gV3y &M]G:F *?sx4Xp@!mmW1pAi/YȐUaqM>eRĎy8Yˤ'ԐہSF?blavя,>nTy|YfΑGZm e:uA'6&sut΅YC&BL;4(PxoWA$"yYt]be QIX`$uhtN\ ہ BLLڂ"6,_W6W Kd.=K:BP!l+21Lsʘg" 9aAF=z\inP䢧Fx"4v Y촠 *<5uEa@[񪻽E8)OinNZ*geP&"c+^t@½PtPAi5~YegWs*D!=B($ʩujgMzGWTX汒Ax;h̋<ȖD?ys*/6 gl*,_qVD"C*/(\_EBwImSSa+2'k,G93Z8̖|ZlXGZ^st*b+XsX8;cqz3ϗ+Kjޕ*9|F\eot`Ti$éyaWynK_QR ![Cıb,&%0B!Msp3 aIa8OTD2n|(Yc0cⅤ/0 aLRz؁D<؉c OeNCٳhA9y{RBS{wLal:E 5a`7_P .viK Y>suq6>l}GT`ZEJ}l |?`LlƂܵ? m&5 ~*P] rׄF VW^.xlޤ<;}FQ: ~#2} RR>Mư\0mۄw;SLechYs a#;:wu] Lg^ZNw]WGj#]Ki!>4 i ~h:xuc"ll;$";#=GmKF?{y=1kWOm~ J,]+/Do`[H B.$]>@-v[qW F6&esˏ bj. sc&r40pҔ,KqD.62/ Ӛ `_ϾjS5k78uG7O~>L%\j~ٜy_iSEyFS&M-Ln/3iB Ҿ&mjGL=@ ViLQ|);8fډ4 LIX_ӣ_[-?H_-_VԾ=;~l+ }O?{>/GU,{i bB.|u|-ϛ_?םX' ?*y_|O%ƫUi_t8=ZԖ6A\k 4! G\9tEaF,Yfo3G}܍* wW/$deNC\LC)Z3ʠ*=Z>h!y֠ɴ K,Upp& ISk5@M5$V=qZV w!B`#I޼}uNv?>}.j{HMa9uK+4L=rZ8^surswP4߆Ik h\>:t&)؟(=:I.F7'Ik{_A37RVge&lz!ѻ@K_"-?AK5t w0ǝfaFweI3 ccrD4y)ZRnդ͖0Yzv8oUS%IGϕ)޾ڃ\>WE&?yԪAc虺4"a Ogyhcf:/y2nq?TS٫#I]|gC㹋t;2KДe-Hv˳yKt| uiHqgM]v'' Uq7rjs{{ꏴ/fToW_.rU߬ji5]ZBfCs,c s1T$^n |].f*2{!̳ZLWF_{U!P^ڙ*bk__0nrEhӾ;?YȊ>!7@HE[mafW¹gΕs2%svzŶdu`K[Ri^e^yF%g~]m.҇E|rADW_d#Xs: }F*]&CyGa/iҳggup4RJml˳&\O.L U@Dr^N.d(@߉Ղ}4H(h]hSsiȜn$goDPs%<<jGe{87d^ozqz+ MWyo5dt}{ww{뚵Vy ;U:ԽZsꓙe'rf/v{m, 4/ossm3uLsm=l5~ )S$K2变ncK$>x3qOvww>{QJK_1upzIOV+tVzc;٭e>g Q^MI6'teCِ%6m? 
Q윚-_}EJofrAb7zݭ3w?wF9rϲ9/ʅ ߾yukC٭[EhlsUDՎURٲ"ײ6{<Юkv򚝼G`|Q3{"Syv-mZA÷"7Wlg.֋xګBp7h 8c#A=)4 E#$99pyX"yd8iɞ`,o(#bW` r&䱰+zAIYîv\zwF>]-00]--'veJK+ܰU#\WV|=HF/aKw{~Fqzņ췽zrw (q;lT񚓺,2Gsm2٫:oJ_ݏާnu~9%}`O|J~_nZQZ}zʼnkO:~F>~^分3)r,>]z{1eyVWm2TWُ6!IX~2Tv~9~Jp"&`q XJY>e]\khiӇ-Wh4M404&ҤFV$e[f9c&Hc8%J GZRGŵk}w5 (LoλV:gYu؎}0MVVbob~7\Ǽ4tݒ T98pR5dp{''n~ Za;] fж>eL0P:m'p)wN ?ZV"w`W*-=p}Hk,7Zʼn(bo0U+?Vk0H4FØNuF7F5gŷP@qgbgǗȒ*Ɖ1bTK̘Ģ4MԦXZ%*Hq0LscWM+moߧ3 ~Cׅ_ʗ_9?.ˁްzgN3Zڏߵf7t:j3BZ.U;/N@.s||zs+q+J 灷܌cYt~CDž?Kw+ˢMYl:+k~Pmҳ>IbxgܪځWk!.(!b͆LsPv 2'bߝ cbu(h_~M6rǣɺ/׌dž!p9MlLsݧH6LJ B# sJ:xk" 92i2%*E(q|r9S$8%%jcq<`l@#e" R8 4b֬@LACbARHmuP5oNCc }E4!.(&b' LSuRcfXPk ,aJ&J:hlJ['=j [QxQxW^fڗ  pGJ86In?BX##$Z&[A{Zw*kڑ] ه0Z3N81sSkt@A;M8f1QD %IbL)ⱊ"; h'!c8Pn.!4b 1REUq!Q"K lե{g}7U*ڨ턆6J_3G)mJFII[ R|7]*o/vu+'u:3e<)Q00 DvdDs$qS@dEDIlD$AIS!x tӮJpaTjBl硐`[Dj&/0-1̺jO:.H i(r#>GPG[Du: "Xe7,ꈦq *#P-bqD,*ch^ߥoD"MqJiJƜ 7(LFB(68N!!z iV2ykI1R`rR(vG%YaIaQlP|XHtZ)dDӪ< &&7I*D x9/K9oi0kum.C?'%na/ ݻ /L 14۳UBP9MbV iќ,Ó~s$av',G~6{vӆ%|4Qn)_c#JpC)flj5`1y1ߣ:mwܭC9N}u}92à'7 p;[\-^1/yqoP%f "v0"Y^;`nC;Pxlb7e9̆ SN6M' {ߘ##ՉR#d4e$4n%tnzcmbQ/+1?6v{2˱}eOڋ[?̰9ZUMCI"xu6st hrdROWOR1S\Hs 5Jm,Nh)7̒Ā٫uQ7]=|yR;r (-\` oE2ԺHr%T-I$XƝZ;53-wF*~m81|E* <1]\8[)hUjw}\M&h/gĠfC{6A1J 3Wrs:vL$R20 ] ܁0&۔QSi&UChJJkT'(1Kl).'$(Ni1OHazTjܲ.|r1ONT*Tn7T<ڱz'ׅmMW0U%=.$Gz}Znl}u_W*R7}Nu_B)Yv[[I7*39;L4%$ 8dEJ 0"԰1՚c˦sتQ蝔O;#嫓t*u'􎎜t=PR>{v=T+>8QRdW^2}u|~ˆ)ƚ"ˌ8V"pI‰dȘW\XJRMף+ѧVkfzv$j1Ԧ%'/!L5t̓;/D> !IgN)V,=L/aM͓l\wS˻K792ls²n(t]=gj⎉bn iJ/*;NR"nn85I:x.FQ\rMNZ?vKҿ)hvLLXxf Q^*x뚋Py%Ħ̢A*҃Q3&hb $6umňѝr4)J:ؑ ,XwU*Qvh_j_̕2Զg9o8P*-*kvG59Nj>:*=0vZ^xe'zBu/,7/3<@96λz߽V߰m;kBqΘő?_mlmN9I !KB"rsJ7ZyBG;>w^ɡ;X10i 4 1bᅒ3mJND/s2gbZ8"fz<VŎn3twshR.~N5qBډ9qz;9H~w+G+i/Q]3Zeڨz~n`|3NK˗~Qpr , `O\Ɋ>uk7{ ^M]LIԧ`\t|14o]W;ʞ-~9||sw3p pYl^;};_h.Óvk.w.|~6`㛋ˍ <Ŝo;4}} 0׀n=TC qS%5Yz*q9,nOSpF<n ٽd!oG}j].ȧc nrKc~*Ftu򏦽 s JlqSdVl5?VY9[§E͕y5]WpQJ0WPffm qw^qᯯ<;$>χ{?nֺjg[~c3wƬF6-6B|z# $kGNt՘}Y>#'n^7 ևT^T?IaZc#y\:^fj᭽S[<@ʻ;&'˫a[T]|ꚴ/纓/~νx9j^s34ߞv:o{>zm &GkA;T79CxyGt)_ PA$Gzi#7/;y;t],+}#˿CoާpگNNRvYݓE;?׹+D`87bYkanFeK* 
twY;q%Q\u..H)!)ۊ+}K"i,[3 F@&Qdx}_]>v0p[&Z-ưeWWA%|.*Ot>n 6#F`i0(6=]n- ]Ϸ]V+sS+A@H<wnz%pʹO1W[S8Ϲp;&G3a*-yCQO;[XSzn ϊ:$i\mmE A m) "=qh vI/;QMt=sj v>˨c2GQ Wo_yׇN0mDQdF‹W/G*:25pW ;eχG+7%ìQ^?ӷ!m33vean}Y=ק.Z_GrEJ}q75(|Awv5kY,P'Dq$vS[ߺmyqn]\z~}7_..x7~^Za ;^ W1Ir^iN݉z[:;^vxٺjAukjJ 6ˆ,ِ8m7ߎN~1[f;>VE㙶͛6w|7Ok߅}򗼰`=>eu21Mz  ?:se\saܯ~~qyO{wWǸҷF1(4\h޵;XOمv'5#IDē3{@P8/wz,mNѻ`kk>auq^J]5SiD ‘C}[W??lgG⡿=;N8M Qc$DDJ!5 i'!>懝-Ʒפ#zxu2Q%oש!:J9ō8i"ŖPmfOCJfDдܝ7X7;:̤- }f ޫҢ߈MmfkͦJO:1^ SR )&WŬFE& C|;-5;-jO3weݢ{f+q=?)f#LPQ Djq7kGfYk\0H#t34*l3vڏ Ji0pnuq[l+yd8 :|L]OP=wPĜ &c.[i HqJ<EI Db1k o)G)`G;o>){_[}Κ ͈$GVݽB6r#S3W``}9:ɢ5N5\aB`hYg0 : ʺ=~]GQk sZD)3ll%1JYй W.:%JqTs+SuʰHY v8Z)RbD hJd#\$ǃ퇫[d>##!л^i(mCws%&}?' ʚcbR:yy:@k i?Uz!NPgޭl,G\D+k)K@ X MX5 gԉjgTRKhҝ)hzRM0h %Ƌb,sbίuD-Pig,Q*f [8SKC c@^$5 %MhBc@^k@gq{<_tARP,4|d"Uլ{GL8L>ܤOdM^ +طaY,\v24I2>ӻ0[ V)<~e)\ތ^ߦiuOY", /@B^2\}NJw%i q@Pk/$ ScIBB@{$MponZ4Ǒms9'y2-'{SPa/N]䗲3A坷;W^rʻ "{|ol /<M, :䴆ioms=KODm IRR1䝸A[cPYC:8%XLj^5c_IXGIǠ/ZyGK B($@eh Kا OzwX EY,Ҩl} .:oSc{ޏoz/DiSkOQ^sgѩئzل;A p[ꐩnf GmQ~V?k[_d: Q+jIkW̋IBF AtUTA;(< @/ȑ~'U~b]:2xԴS)_}n3r%+rxZڮeMKUL5gjFGM۟^VJlgf̖kf=JUhf{㞨#SB I5 u8f"іXvkb18!No!beܻ*"l@p3@A 5^3w1YnBȗKeY6uă7ur;*p$1 $ۦΚq;zC;;;gY(ß.Y+7bm+E-<1&0X0 i=.7o37xlf۫$\U6F>>5IiU"y$B (]J~PZRz _㼄Y<根njyidr`Kpi>#f,JD]k& pM7}ؼRjk1VL)j+F):& EA_L1Njjd(@jGTu+¥{LJ(0qZFڣ{xµcqhJTd7])gZD9i(\KiI vvFq1QvJK?\_1_xpk}ץ'bYz4 t&]>UYlf^I2( j*>iRk)"b럛A;3bYIc@l{ZE%3ʆ[56tC`~ T`< F?7&T xOX/ d*Q€|IŚZ .^b97q~){8Dh޸ϨHqip"$vJ2 ~/R :n6)`͍R7Z̩Uј!JC!~m41ц gL6ΓݹQV_ьh'J_G_}j^֊1W=MfcImj:E14"&In`w &b>OHyu [cR;l4aS #?gNR MsK1*3y}EBtwÃ+.>#hg / Z RB'Basc̱x<t0N$)$f%N)bׄ`-j! iPZԳ޹eÜI>t T,G{QoƟϓGZӬh;fSPu՗u<)̧C#,:ӡBV\&f۝ڶ0f*wtQ7]|}&ѻY0=igЇ,uqwwʥ[j̓HEF 1ZleF9ͭQF'Q,Q3XJBԀ%:ti&F Q2/\Yc{w~?]7/2p:@B;+W+n$ӱP&s nT\0 %$4P>v 7wnyXNJx˅7swӺ+x/WppvFcԙ4e=o7%Kq̤KPc)1/FDXF3[J!ukjIijT%S-nx/X*V$h9dr~Ljv5VnHqO9NG`iEZ0*L#)🳭#ٮ'~V Js'D!)#i[E30624I-FHȈ\k. 
q@s3YFݱ2}f <|If^rzw^ЖbHb_`Uili }{ }|g*M7@!wV;s/.K(t玛;bifBrEo]EU$݅(sq+.^y>ܲ܅Aw\M Hz%YV̺es103ģȤX,G$zT~K"?h%{M׵#aHjhx(̱)vXlKKB9Ua8)[G?;3p5W sy R+ԷD(qEȇ{J Yeq#$aFXe0$2ڵvMM.9WKW-~!4nsZȯQ4a[&&F  KcaNӎiX +Uplye# KFc1 lLCVG0ւBȂޭٜVZtL+ T:X2xE*XRs0Ifu؜VZMk5mh ۀRԸ [_.\q7euJf3S׿:t'GBBFP`D$4ە/zW ZGM&' hAV3^hfyo<;$C[)9S-~H ȲsܪY椻rCᠴlcLIcK3yKr;K(F6Ԍ PcVऽ[w^[wSJ ޔsL(Rm\#ǔN KcdqR&$$O&D%vF_cXp1  ^vl~ x#1o5m9۶$L"5-S}]? LI')!d-_~ɳhibւ0@9zZaCz(eȬZg! K8gl?Όܲ- HLHc)<n:mrϛv\c-ƉJ`zcpտraPvDaDWTm;]y6_3Iۉes8\`@su^VŸj \x=pS:6o= S7 Ě*sx2OP|(HSVgW=yv3oŌtJ')y6ųn^:wj]ȹ`1fY8Z7$3n1AP#,c`R'(Yo1Il crG`JMhzJ=o|뷎%6`Xw$dšV)bӄ~sD0@Svퟟ{ V]p /^ýlCbSJ kE U02%N^  WNk-~f5I$C9*dPH%9L *Jɖ_]yA}f1sIUi82MS ~?\Gųvti@y(Uo:xx*)1sP^ 2NPu<.^sU|f-j6eA`= ;; G{$@$4?wσgXI͐Ǹ`8;UQn~PNUmZnZ]VhBE+l)5 ?EdA9XUzg=<\֘gVm}(Uk?gMAt,i*jھGKӕGIyKW ]%!NW+fL<2]#tZHkܶN]Q*X 3*CWM+Vm+@ɱn aS 1tҭ+R ]Q +Ys)tjt(Ik]=IsdpuζӕKUSʣbʣ_ttsJ}tl®<\IBW^݁`ޯ^v!v qݣ/{7Sh+2J:0v"tVa.TK;G;oq5x ߓC8@d]z\[/i޿m6d 2! 󖱳0|qR^ޠ{w&3wp_f4=v/3<}p8k}%~=q)uf1뗝he'e/u{+ytG,|P *^Wvsgd?햑.Pf}{e)a`N3O @0'7hfYN̒jt/dr>E`YeݗdY,gL'$^nۻҫL{j4yv!}vQ&ap `E#e?{'z y'?ͺ /kۍ={?M{5Z4 Ew(- _}1 '|-(F5=f/u''?z81Va8?11$<$ HG$R%`# qnB@/ !x6r7` %'U3^$br;3q(E4y 7zZ䩇ۜVo4d<}B!# 0- 6 JU" ?#,HɛFԞB)0 0B#ǍF٭Vpicv={QV{&05 q-[ZJ]itElz$C񭫕3Jp9"KW+j5jΔ芶tuߦRQ*DWX`pl ]ZɶTt銀6q q.pj ]%NW%-]=AQDW1pj ]yJo;]JmgZzbq,DWX p k ]y|ëAC:-\7p_{p?\&Pk~([`VGr!?YÝ~R}|{G's^Y~r]%ϔ}ȃW,WZ B[;?O^;|ɇ#(ɻHoj?~[>#Ҋ.׿:,C[upj~ fGoʐdx7{I w'cƥy`y,Wf*f* nJCH9vpq\_c[p3|oF g;y6/ hYW^̵-Ԣ;cZH~Z 0|*4ɽ 0U:r".hlUlτ7 +d'M '3@>\ˋi{b ʛlv 0zx{r`1mD|chOz<#K3 (,(j2 rn݂TtPjVLX/ m9R,D fLZ$f*M5 C)4uay7 8w d|?I4pvn$\?7uQl7wmb75~Tfa3rbԐhY#Q--C"͙UNL׼?_jk7vpxG]yç~bwXtG7i!qt:ݎHƹݿw0З<~LlGGO'P~`edki**_)m aH&90^Caī3a&/\q>w]0Ԓ]0A &-Pb6&*lM~\:DprU0`o7ۯ~5t+ZqiWHU\~%Rqt]і;%)vrcu+URwbZoҕRkAtA+KNW2-]=@Ҋ$ЕVFh)thWWoHWb#ҮךM+FコboҕRXAt6'dVn ]1Z fKW6!VGWm ]1\қBWֆNW=vcKWBWNKtqk`k!X5E_R|Q:0NVGJ".h]U@lpx?Ʊ օ?]w*>}7[Thכ=I٦tǣ^v{w pK,W.W} !jF|]u/M<ݟL|Gw}otO3|:(Y;wbZ^}]7pW2wMqM/Oq9yQAăd&FI0@yWo>F*gՋգx}Y{6?{xYg߼xο_=}^ݏ+:s VK{['?^v>K=D*|||_hEdQ3E,l'TahMpe~; [.hO)ԩfFI 5y804<vo[Nt~ 
m~|OH3:}:*U Ns}][R~ptcG//WЬ`]3|֑;~cg I-7^2.H{@0-̩M':+8E6S W'~ߙ?דȚ#S6zNFw# MoM*2Nx)_9Ȕz`mUttٖSg;kr\|)xg] zpy3Zs0JM-8teVz+xsKs*{_ .;.+s%>_ h]-]]w%w+nJWbfc ht(2-]=R|M bas pbwboҕ6VM+n ]1 6*9]1J&k;7][ztU&2:9 fS~usbn >z~fRw2,VZ`rͣQw;އϞxypǟynj{'Q, xaPFZeuo׌NVˤ~Ynh4Q=D>:^|AJ/[~յG\%`GwƲh謆%E2EH9b>~9Z^1ݞ`o>s&3^[_ _B,DnQDk8obvYxc*lY+O`ɐ>% V+s\ֆyd҉YUb9c$xNTelZ\ 2E%de7JN>iy>Q.u6eBHE$XHȔ'M*T6P( $kY=⢗cHJ4ap1(liWBc& K/XXJ!BQD \.%Ɇ0!^&| E2U}`!LNbFb[QPQB m>p h-жaS+Ϸ' jJ"Z Tld %A2Ɛ'-s#a&t9#vul CKjH,8R`{JC.((JGآʹ, R ޭl8՛LBWJcj$ @F*x0 f -IuYȠ(P_*'L+|yXqVLzJ! ^5(!Ȯ@$ !6CA!5ad\Ɨxk<(\'X !@Y`P&9ih$mfLV"˄8}n*!oMI9$h/X,xĮ0LC1eW&S%sL ufS."K@` v75D&/rR AEMgdD]z")8ZȒp +Ex@I~U* vJ StٕM K]FkX5VQA}Ysd[BT} T%HH/%i sD4 8 6B jel A$z a!ی꼵 F Lo AbRYड8cL:YD"N`""`FYj <: ÓB\ oOr*hږ ;[.0As^B4Z$L>FjAuP~LJ_o8@>+$]45-*d`ffSx`g\?X|Z"N k A-)BJ$RFVWex'E`8҅~0G>S&H.: _R˨ՙ7C6D m*6]Y[PJT٫ITR<&K΄fNU2}hr˟GUD|ҷJ>> ]{ #Bxu)]LAjy<IrC*rȻK\Ha(#TK;SA^桔\k0 6#2ZN:G!X;Gd]BxdWnXbalD0jhk rl 0IȒ,L\9v"(ZIbD" >QlCV@;Q9\-J^k30l" %M ҉llpU>v"" >7MJCBx!mv]bCO q18 fi5:@8/Sew@x'Q<(BQr:xlφ)?ZQC%LUI3KtwS}}:y\+[A .CVg}6ߞ͇* !\Ms77ɟ烮IGԢjS=NocR*;]q2uLrLr|\iwO48rZܳNaG98IIqD3PoWX :(W:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD7KQ>]VV!q QeuN$G7HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$YPT>uDRHٛ'ꀔB#Q=u"Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuf:DQrHɛ'ꀔV Q=u @XD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAuD$ Q:HAΛ!<$hic{=-}w`bٯ E+*' mxE,WZyEFKjWxE^sJĝGe9i}и:V/eAǫ:AGuJ+7??} ތGx3H/DxN?0+_a0#0CshY Tp՗kHӀTɲYE5\vhwY^7s>5u(|4Ki6 c,j4Rן>ȜD9T%*2Wb+ʬde,!Fƹdaè&V NbeU0Ae)Y2,FkI1.0K\ |,\()D4E\iϔJi1y9@`pнqWE\%⮊uwRB+kdGleohoUV"]^QB@WOؼ~vw4qw$]=M%}J=])tWzF8'Glxo+싻*rURzŭ푻*ޠ+V2Ur)dEw~ܕ0}rW AWE\-@ZXpwRZn]Cw%`D] 鍻**z]7^wWEJa]CwU>]uUо"K;H)wcu5c~U_Em;vӹPWFͩ|qrڶ8{ah֕w*/~2F ت3LQ7':A%oPIy:;]=/}5wΘJ"+-WrIRIϻۍ}oi[_ i7w/CP t< X_lH'hv8?KKQ_  ګ=ݜ%PL70Ă,уoDřwvU0=:|Mp5ioˏg6\+jkR0$\tgo[ ?Kb~&PK0yS}a v #-a'Qb$I}@.)5)pYk& @B8K6sRd^$h#àŇ4Y'q6#sw^Yût b}U[t_WUYRGجۖ3<"H\Utx$vRխgޭ|Lix⚔VU;HT@_'cbaOv#:|١3ϐopDOkk j](9ܾ,t[U[#=DB'!oVgiP9 g|طm>6_M\7 v!Vjz֕k/]O/CȁC\k-ufhQYd< kwz݌R6fTZvD9KJ6[ 
[binary data: gzip-compressed contents of var/home/core/zuul-output/logs/kubelet.log.gz — not recoverable as text]
m]w|\,?ֵB&U.;|+J[g81mY4jS]̌ve=04 }|̂緭a*|g<2|aKԇ"' 4 dog m鎰D"_{ffGi`E,?.cP Dq[ LQ%7V"ZHS5f\^^VjQ\i-fݕ&؞+\ܽ`+uX'3w{3iT3# )ܘrzޞ;WQ8hb; 2GW+: \HVl-QVԞ6> Bթ K:$1+I%QLM,d~YJ'6(GYZUrnD$P D=f$fRF|pڟE"۟zv g}9 $C:QOO&=E)H/álĬy+,$P Nsk'tm0[((җ &@Yb%`" W~  OYZǮ\`˓T0KwYx3GGrF6](b[`U0Nj0oXH oГ(@Im Wrb";v M/3Qkҿ-M[捚 m vd:o|jQ1nGOMsM.s!cAw0s얶,**D,%wO$&Bcg2`9K\ƂIq7L:Oxn8dUg20^LM3~]k݋4qx]E T;2 _g³y+X2壔$MrwK&r] 9o g~/CŇ{kTednߏ<7 ]pq81吽d62e>㵍 uLҧoF~ MwI~Nhl‡- 18ki)2y?HTVz%u|G= #=J"B $BB`ϸi*'n'a^b5LD Cg̭RyYF8Ң-UR֚l qx]?Lyi|@1K6Ċ3V1d}Y(ݯ<'O"S긌L<ȃ(@(x> *T(ͻ/K Wc8M6XNPe⻞/NB o4$Ae PSqU(BHRrS7B06,iNX0؆2%bKpMPDE]^D5@"7D`E0P Z>g3*ctPEYے.R$/ʖڐ4ޣ8*C 2+"Ȋ+"+rw &$ulmlg$hs|-Ő_B/̄rnB", G`PQ&)} OS‡F[7yTXpFp.9EgTb=')mƯ:"*4~UW5~ "ŕ@ H)kGBDh\"0B5:[rJ8Kl8%XEsu[%[ݖķ>-ؒ-YȖT#ƛj,b$ ֑hJPK%MW[2\԰,cm)9E?iXc‚ü?/b̟L|,l`h<6icX{.u@\Nx2%ǣ=fe__ՇPޝdXfi玉.3^=-ĦؘgBk5f }CJ1$CZ#! H "Bkb2s#/i?[v`R&L2JSR_P`N =QU# @iAD. FDAl+tL#)Ș$ AIFʗ>a Un n VbVd' .%QƒtdG$K$w$wIcI6dcI$7XK1M ԗ xaQpQߋ^%´$,Tvren!㗶g_~;?b2+Ř v;!Kp*V Iqܤ7@ u.iw838~BW P`hTFB6Dr+PdX6Sə;.²! hK+چ ZzUף1-\x*pr݇fƖݵk|땧51&ҝ8jTIocqhx̣fڨCBWk_C 7T ~1&t6Qg, +sg@SGQ} {-R'e1`C,] ѻ,9ŭoLZ.yyań:U5/~7l֭%]"i™`TV"/oy Rc"Yv7@tp☰ܐ{[ne ǒ4ڟ VBoBЛ)&t iv m 8[`a;"VhjYT@.:uaAL8AJi vD ԉ\'$ ";U k>Za\a?ERxjKD4.[,-q s*Pw'J̝I^:g$T5J"ϭ  @&_VzMa0$( c\q  «b>,o%JG`O3K]"V' /1 ͤjKW̩ͫ~w\NG&if`q^_AnvRAWU2S[~MQ)f sFXjek[W+"Ea݅GL\3YmO:,$(Ho_ u0U)/jwx~Mb+;48;l8|Scsi`mCJ.*bt׌"DvE H)Y< S㬑DH) tTL~8[&ng`쭠DP s$Pa5:w8"EcI~x_WCA;6?;3ew@!S0yٵOlչHEp.sy. 
VRɬ3$%v"^ $ xS^p<0%Nٻ @bqLj4=|ESb4dMS9΃N2Eb s(J[;*J)@uHζՊ+LQ.,NQ0)媐)XD͡}93Wa1CuDsɯ<8=Y08jB^ =fVR48JgB8 YMRBipK?Xq4+<Um x?&_'h< >^-IngGVATk4;F<T ĹRHBp];%Ld Jb85NGN*IEEBT( N?EN )t>?2rU%Yzd%"} Iwސ |i(Ff7hmh!F٫Fz0J3sDdHNy9Q9"MP獻3" ʐQ:O!Rdtܺ_ -IH#\bi=+O_=sM͓IΛ0&ms0 :>NXnuջ`9,nQ5ñScӵv5 t4ǃ Mx:{阇q)q=@✢Qx=5=G6Cngp>ǵgZ|𧙸|dsF9{d @ D >2IaqnDj@TO@֌ .xxZȭmV}G_"$G$Em')EH[2dL) SP `8x4\<2'Z@agLN-KF (t.jYf$<^C.RQ"l: AjE!YS _z]oUf}G4+V v8h cjև:f`­P%פp#ƈ59=x;hډFn;I/qȄ$pIGܹ~nQ{?$ $tT.׵GA@r7#PQ۹ꔐͳ)^ٍ [V[u1wJojn_4HYkw!9&T#tK6N_5 ʝ<6abp?Օ^).tɤW<}ߣD߀yo+eL3=&߃P Ll_JR/+A # +!#!?': v͸Eo"TGx"tC<%9>]J%>kg"jgv&dC77#oϕ~ Bxz$c=zY/fuetU1iM˦*/{0"̋N͏?NM݋=Q$J*"ۋkSXD?y_V50N/gQ4Òu^i[ aR&*GЋKڅaQ̥(Ήk}/xPw^) '&K1q1^,~h%(g@~2Rqڵ?pk)8홀/qQX X ܲg%mfc[ei?f{b)͚AD䤚`q]l6YC7n|73bшJеKx6j*HPoU@=×h(Es5XLƾ@ꆨhc.F 7к޲ބB1*]%u9^<`w9w%bo "&؛]Ɠ\;c)5JXJ!oJE a5thDcFs ?+0 @1ބU˭MYDlY=6ڮ-_U6~2%O"%Z{:npLLrID+T 2i#F`(\uiW~xM4*v gd ׋"Y4e]g' 47K$2{+>YK0GS-=VxQ JB5 ,39!. Z`J I,A$VY{S=8߇2Nd8D3ЎNA/T"O0 ?ȃl bO J8b%!Bߙ>,/ o|tu%] :Rѫ[onlf̗P]"R>)b>^Z5.o\W}rUѾ6l닷!ّZ=վE]It1H"g]Ҥ0 'g~oBc?Ѣ|:=@$*9ovs`x$?X+.x!nƳǂ"m¯0cLYϘ+\#r.B['ɷ޼ <c_z5fCIjs⭷3=td:j22$ʨJ\M5mh:vrGt >hDh%pY|axZ2u/ojH ^9pr`V v-̮*QGs'> ǀKplv]4S KS(S;6o O.p'; =17b$ܻ,g 6Yeaؖ( Pj^{6X1@B`85civI;ì{O-@3g< F:Z|ӖvD㌋jg/u6p Yb)jiB ` Qoc?ޑ^)iCbEs[BgǢ'|d% eX)8Yb[V&!r֠}~fAz F dXI4 Ȍ P3}}1aOS[Gٕ#U dݜrvlӄ%OZ]p.`RKQR<\;bIjzqTGp.YԧѺ}J<3 D¬=AeB9A Zkt~OޗEk6owt e+n7'XMb//UMІUNyE{)f2R`9ӣV\2!uFnqzMhsQ& ): JHY.ԤȥFI$f5˖'g]H>:kR#\6,e" ^W:0X29#tFribbЈ4pI ă‘Wڽ 0t+N% `w4|_1TiNAk&Z 8)sC,"iL ˓) p&d "e!M#F9Be#EUuƂّ"^YK3!scPRHJ&G:&dT YP^1(0˖ ɬ>5֐ ; QN,#@tŮgEd3I4k}#$o6Z~|lN%i\v|?^pA]{,nlȖIC[6!i8iW&=`,rAu 9#V${B`t'&0s]FRHڨ4r18 p! 
!:~Vi|CL֧F&N*/z d)y$2$&2vY $C.%@@ A9!x% ՗Ͽ_G;~8bysz2DאUGRh^GR;iA*sQ@ ļId[VQ"]gS wGG9FP!( PKa|Ug"!I fFxms wYTH"It(}lJ2*-|&a47e[n[N>[ݷ),̜3*R/GJG:=˝Wi3 #mM7';jϭ{dR  A/xNM<"@'jmһ$eHiKKF8i"%#yZ!9".bBZ﫷aHpҚSN2Р<١|N€݁)%KA9~8ij]|Q$7d=liEhOeC Prf43d YDڲsxFNg Fo!0P iٍHvMOR:spS 0ÝfS}T|"jeht; ~](fXRk%{=$Qqԁ.#,!v-ՂIH ]biZ*a9^0#H˞$H0D[`9\SHXPXc @NdtM?}QY-Xd %Ȇ#VN=%# Bq \'য়YNgzuSJ6ҺRiU+8ԽxKxzA b;l[yp}|ѭ!ք1uXaؤ)GM&ȿ=̄G?ɿδ R BzrpL:2JH[c4^:]PtCO 5ٺ=g!/=(,g>v5r]ۚwz='Vyd;rNWHu;o?g/o/{p7`'6]LKsÇ4?|܋[ZY{mWսO/˘vZڞX>8 X~G!$J_Թ+'8j<8Aw_a A\P 2^O/>^z{٠_ת'&?|~ҿ|ea\zvbB^0Iu3p?&=| 6co?\4CPo Qj$&l2A`L-*:G_8pЧFPFi)P:DՂyq1dLD@y*5Ȓ6} fE4u/cDoջrEĀX6Y #d>8=>;fHRٻ9EZׯ+Hg[I鄩tt*6 ֔1g_=!}`DCuVXQަck8定DzU/k "7Oؘs %cU6:2F^$dvU`L7+e͆h^zwWZquU}z/ *|!+>m=cHߤpMolݚB:rZKTAbv6Pix2@/wg %t޳Vs΃؅-2geѪoVzn'ԻhU7WWX,L1#72u9MLÇ dvVX= 2Xh]=ÅM_oUڰLI>t;ӧgRۖϭ7i[tdoQ )@9ͤi|zhqnm}1鞚aتZ~{]] ~zq._T}898:<]Ӌ[8G RJwR]yTm'H#V z% O  ,D#'hsVp&Р3@J>ecj9$ dP]8YГ|UI|+6ozޗ?n+'[sUna;eoC H _t\p }jZvdZ'`TW8FGڑY=iuk1^KMT!7*x#yccn۰a#Zn0F̿=IԺ0})o[NFH>,N:jq'a%pK=K$%_\ZSVD1V a4ABSvlU&һ&$[_YM7A }6QDP=oWoD;H sg xx,ͪ׿dF;/.d&]&md!h)դwuH6zVw F |(܅pk`t!pZOg2M{>G:^-Obʙ(3C o[iAѬ{o/3]_毯@nb(I&2.FӐd$̻ȴ%ejҺTs *)=&u(NF/KQkJqCv>4qۊ ՚/ґ|$89jCwڍ Þ5u t=[qOno?]*fߒwgV|=.z#Eb< B:!K`1 0aJ@g_{7;0/Ł|j4T})Z"˽v+L+-eV e71j1FHI@ %I<gNYkiM MFLf8v0 A=33* e>2^5 i 3'{{='~ǓXF^Fc,oч`p<4_=ڍ7w1tӑos~-DF-e o+խ7߿_4.x1a:-@9l>]'DH]|fio2TռKf}.2x%@BPB/=) `w6Nm'OQBŧ&Dnr[9Ejloh<;79ML|~&C.%k͢ 73֩r'?5Za%yg tLĉ DDf5:dv/Swf0"[}ڟYOǦxyǦ@i^SNA =ٟ8as|/ dI drL$;)<:ꕙ'{Ws2)3/ѡK\ζ}^NRRoFS`T,BY7ȸN,aQ xLg|]V% vp@EqRsȽ[N>ٕ{ɧ댮X|VoIl.[ͫx|_)foy!Eep3f-ǘ{=|yżux97s:Z/PŻ[WAKKѪClEnGGq0BtZ*rfoor:H4)89j]ӸE2G#RwS#`Wgb5Ja1*Tr/lБ*_Rxc!qk̩wPm(3M2N[rQ(<b*?֊vSƩ(%ey]`LXbyqF.&?p fX&i Õ W$/WG';ʃ2[Ϩ;v:1hcOݱ:$䉋hLт2>W;x!y<(JyGlJ&tf E4I>{~MiD9hTgTn'BCkdBj:$䉋2%[l@] e ayAfo6aUbk#beӛy| v=׍MbqTY=ecp^yFrigO8vf[+tX+HΤ(Y>a+) Q~uS z@X7Xion8'-ҔణQiXT{$=%$c(~0Brr ENiT="Ѥ9ݸ@6Xxj9uJ! 
fa$Jx@$iilqWyLΠ_h{ L pDV 5{40f`=Kn{cReLNty[PX&2,!J+wA kQe l%fZ{@Z!.vV֢ nk)6ϜugC1 5rr!mTi(1yZҀK~©S IvN: I )|bfP=Vxwh)+ GAo`;Qv~P(*ujma,VS\|TqtNx2|D aiP}frXC0 b[4R<"OC(<6vIJlfy,&nN`wC_ƓpܾY``-Fkm\}W} j<4֏mĽ:ۘa&Eny>{0t0{wMb;j87V͏:Ӌn2#P<l(VUVjƚWAqy b {Cwf`ޮN/.f5"cEj\'B;T׸F |+^'FwYIIePe[RDc\r Bu=IE#ZNJYGBc[WWq"Nʪpϯucbp IIH aQ<4)6"Ӯ*ϰ$<,Qe5E^G뭟n<[;[#pHVXgv<ð z`x5" [*ŭQ䩬dUbwC^=jYB JlWDY}ꋅ,'#fv?Ml@*wWd8H+qݺY =3-=Z B I?jQBv8Ķdfbo>b 6j䲑6fݎl:g]kͿlGASNPP+2!4Bp !%Y$0 &ƾp+!v6)ymxkE>ݘ{fT󺸓[Z^^2^J&u9๨h<#dQ.>:?Oѧ XiR*'%UZg][.͟b׾Ĵ?z->5qmCD n?g;i| oƻ__4~\0۳_o|#߷?@گ;޵'˨w^a{Uw]lm-}y_.zsJeIN^| dht 2BtsfE[*%Rx"%2R2+Tfj%1G3 ' `LwkaJ [=coasK/!h=L/&Іv Dyb* 6o -a|OSm^D_/s`3˙·{˷~6wk]NuPyeÂ!~ 3od>/ObƫyЀJ5 W:^yV=teʈz8)Ds*I!=;Z9U[A:P n{FwDR1V_ =cʖmcS#U 32A=&=m2D37%s{Ed "@)m0adZ[oAc (?\2SGPP1[`ՉY՚zaڮlL SqHxy~pWgMw%*c?ĹՇ BLD`^lZ =f.<΍Eכ56mw\_\|tg)s/^RDΏ;#nfg`u7pڽ[V86wWIcH5\bꧯ{QX(4wz[`m+| `j0`&0ݜ&G_5IIԛoql!#Q_UWWUWWo/wLs^ZP=}"'d|i8yJP1=ZZZ-ciT05#L]AK Ig}9k!T:g#VSi"S?D,-8ñf%61[,=`'L4omۋ--iR %R׀OgX$+t[2fNvFgl0cfzE{PL)yh͞:8RBGQY? V;`Ey2yb-~%!ΖxÅzKN .\/rjMCn˳3#Ga[ƣw`F|G/6e%8=ߺ|y|Vu҃\V6Ƙ2_|@^a& w)_.Wی[p;/k3 .Q^|0Ȥ ^'ܝNYvp/=xʹȤ& )g3UăYhs#]h \l~nɉ RJPPB8XGƊ)"%<ґD qcЖjJP,QD;S}p O5ZҳJ aK7 !_!zJ 6D:[14[UoUQΏv|yfLFN'gqr׈)cb% <  ˭LA_@-}D4H@a|KD_3ͰBsZOnss(m^uRxjƇS fJBk%CS5 #.Qw"F:ry BȊ$@q/ yjZ# 5V][ƌnmhtP??ucL 5Š4u^˴PݵukhYֆ*HBƒݚbPFuQź]s)Fzn-kА\EkH󖻻8b%$b,)ʀ.%'9ƌB~%N;诞mt#d@,4ɽv2KOz?.^)J0Gq7f~Qo0J݀*G&7EMJ% Kii=PQF[>zt^-vQGr鞞 .ק`/9a+K0t`䁫N`Rɚ)cDn`f`c`JbBW(_2~]!{ςX=~nO6('חP䳛{uSz㍷y^3 Y9F?ZI]8]> Ө5A44Y9bfSI[xs b:9(:|L bE!%]q9tF >DS^?˂t3Źc4"Wo A݃|y 3TΘ14&H ys+󅁎J^XR r?@Q/:0*OT!ITcQe]>s5K\K Ebt)1GP`Ce 'ӐBmtY⣙Nr S#ݦ=1 ~g#F1-l{v@K_BB_:ښzȃ2{h)q^d:1ng/S(b! 
1, yD?RJJdb2МhAFI Jf%DDADChb s{uw6v۽E+ 7nC~b%oct̐ԧ\~4?KفURP D6iwiXʹ!}#b&x0`cZCB ߎ*/z"gE.+k/ Zڰf)>t;K쟖_i@a+^($vCqrbEƂ 6D`9Ő+!-$ |VI X_g%$B/>X"#kbIq4dž0)yRY91VQYVC畗aLw|ZYpYa"HL :Hʼnズ|:1 GڄWBJ^'~|Mz5Uak& ReZYV*]#!ewrUA@c7Jp`{IOb+LG G:vk)g}%FI A!gJ@%0_.}* Q%Ik;g[_ؘzQϑĔ?fxILJhM_J5`΋]'_$\(6}4WW ف:|jϵ"εF,!J  x4 06jB{wU?|IH!N۰6< Z{oq 4v͍y4NzƎ/HU` V"2R%w9"k%m(]mO*aPsQp!]NLZvv?ۯ9W62b-n5 PlubAOB2CvpTo'L(efcc  %(( &_5%wS`^NF9 pN\XQj$vשcM:ݷv ğYgJ!>Y I6&teq< .+Դ]?I-BK u zX?1Ơ!Aݡ('.uOs9_>dYaK|K)H[:2.G dI2j2w]cZnv ݅/Q2ri67e6/d۳d9%= r4 8badg{AE0|3S$z ;4VO[2]8nyJ;7Yc}V6%rM|kRϗSsz?v0H>!_Fz|ƾ~=?=X }ݦ闂rd߾~ Ӥ>r-u'z);hp\p<&mRda69`zn-h0`:S{a%{}ҌtbvݛZt湳в̚72; {K>;\ Jtvt =~g6R IE%.)_sJrH,xwQ&,_>"أtSLd=t xr$G<݌2;it]=z$^|q@p}:% "~(x3Ib*C\r\S̮ȥЌ }Tfg]%jP{VLk)bwRޓ6n,W} a@'6y3#_ <RMRRK$^MyJĦ(:bn֏I'j3*/ *@b!m5TWR!I*yXf6m\^8Q\l!e|{d#rW| $eV<=- Ft3ERP=[8s9ڎ,H%¸e cKYwiKbS<+Wvm8w< HA7g0U +,QdHZULĩM]&ɰ<ȚS)? ZN~<#ƙUڷ83ӶBB",t\/W^IuYS7ݨ$% l'5RVӋ+R<&d+Rji> V0~<_:vL#oiu0ڀ"J:Kt<2CJqkHPFaϽ6vO@ hD泠##q)V|3&a22z=>_Ul8tP͗Aw%x_zu_x-}nv ݱiZ/JhR.ݥ mWFp<-тj莌SV OuwAo{,1Ҥ! ]%>s'(eF=ݍc+|r0YY>T-7%F{?PbCݗҺ1goi+dsW:a wR(T(}5 paM<0(BRB =At6!&h|HdJe^cwpr?Di!v|=Џ l|M3 VU@2x bW@ezѨ&`Q?i&bFX7/UR_2RlyLkmLĔ*3;:Q0WWF8t=# ӳI (( 2U67fVf,s"yQX`5/pEP5z ={gq>zsFQ,>f-/cuc~g%e3˗r4{E@ͷg$9:]gq.89!D%VOM\"T1z4S/5~q4z~>uu'R C_-RA(x yQtZN&镁@?|Q)MbJj՘'JɸՋd>q>1͉sC̚zgA¤e>R y ?ưcJ * [޻^+Iі.`mͲp+.|Ϭ4hvZtbRxֆ;z<$تQ^OVSU ߕ㭇WZ=U |_7{qxaPZ'yL/,ۯz%@XCE}J;w;ɷ&hkn0' Bk..KMJ-& A薨76ǚDw4Si օ:6 5 S^b ڎ_If6GFm*01[#" ]^,KtmM.,tf%ě;'.K\ Z]ڜC֨!iKBW[TCr*.Մ4Px2-ҙBۡh[8B6\y4 !r]&@ `i!Mr 6z}c0$ Y)HJjiC %)5Chcz4$û(Y O@?&+-qN^>tS}ӅB aB耲-MzlN q'+5%AoD*? @% k蹅 gԝ>n ^\ QS\JXs',pPn)^jZfSjкiBR8RcW{YqR9LBY,uzoơ7U.,K!F'*8ѝ2Oﮏ HPfרƤ??+0P^%D #J^(8xQ?ʈT¾ud7TE K<p-G~Ҷ+3>d% ߇uA[A( 76̮K$NhFtnZ 91gi{ō66`?h'z*nMXq± r nڞ9Aka 1Cڀx._g (F TQL8n11oiA#=j a85Y*@hy"9؋5K%?.;eoҧ`oDP xF%C]]߼6@]#[Y|ƞX"Nb3h1 ư΀ ,F%l&NL[ddʬdPFLȏS$6}"\'\ Xl(vԥ1ԹKiz5e"$vҔfSwst6V@uHSZ 3d zeڸ&P7ӎ4 )F r>O?gw}ӽҌd]]MKIsw2FwMGogQ=ś [w4+9s99jwCG&{-Yga )BO!! 
= ÀPE7 z;O3>׏ȝ] d8< }}%C Q1ʐ$['FA$!>w@>N{2mj5!F@% Vs}6TW@cv_8BV}ہOxg$2 !,,b{镧\Pc@$lwyk,UL@g7M}]kL7L0Ek2 >D9k =#ir@r(:oS^eM_v.:'[;l8V=EⳄ' ,ZΆ gTzJ3^Z"a߿ZXþgY dݡ3*P$ @-YI~$<*~Sϫ+8e?Ljr 3Yw/5x}yJn"@B;jԮ5hPذXȗV`fPi~c%w7ڭ(ɮBVH ZU(Z)u+R%ܬj=CuIl䭷'" 喢PF>F, IL)Q0CHz+@$:Ue H@:iQ7ݕE;Y>}?G C-ف;O /0k}~DnkD8;uRY+ΤQ(}VaDBHvݹ]b4Q er)7IөASXft }NB@y`OCBep*U2) g>f 40ԂRRl_)pkAlPЋC/P$IO7U&rJh被ЗψP;ia V6qNEi啺0(-nˬtsUOeVB Q*|u ygޑy8C) |C1`7_v3P8?9(N_huc~g%ߢi|+YS]`܃-Cr7v&WRPGo{Wu,xOR pWӧ䀕]تn34+9f,m#>̊+V2*QB=adqJujcݫ5MmԫhJ J(~ypJ"FoGK2LQEΕս $K"PDJE=AP$"ّF wB2ŽE|eOw%W2IQ% i||%Yd}H_c@U ^%3K=s75"*./Ԝ4U &n2Z AW`=Ҡ^kNy4FCǠ q%qWSƉdG;y2x1l<_*=; gwY#CeH+QoOwW꽍FǗմ9"isi,srkcEB4`0%Cxcn֡u9bPrj7ASNZJ[6Qܙ{Pc=P)ff&3PڻAtr`6$]eTՀS_5d'R)-ěF09=ª|ɶLv|e[xLwtȗcˋyZ޽Х4&,lէЋ#[Ќj:}硟a,_%[7t1|>5dO= Kug @Oc$uSAP׻lh/VÎ}ޫK_-4Esi>Ƞ;c œE 8H:[C*cNK#:$3#sɳ$ލkݓH;rFWi*)Mݩ#?Ic{va]w8:%{ۛ  yưWKJ(ntvă{ڵ!*U Zfmp3kQNʡ^ #T$$TQJt*{vu(nF[$DQ@m5IN&ICs@[O.|HL.)cN1ꁳKRǘr}S;DTh%%+u;MѳF?\ϟRAHE|kPKPZ;FƐ 67֖|w]>~񋙎ýouoBEGspAA׹p9,[@ ݷR[g t?N`J`{vduK؞$C8W0W,J2&[KE7iDjL$ϟi#g.D(qc̏7R[Xx|V[3;bLgH-z ڀ™($#2!$ hl|q_M0n JPY0hA2TJV㍼z󈭸;~ʽap/'dkfFz><,>GEzKOqY0$qv%ۇ5E6m?urYeNg@쾇Ni~wN{~Y-iM"Dv :mǶ`()AX6:{(AxGp6ͻ{8ƸwtJ z2Oql'l}y=Nn{Ojz{yv\ʋ/щ g.XK/&{= JhH@!RQXYhh(%Y,0lФ($);&n .6oyah09R٠ guQ. c}B&oRE>v>㧮`Tb?Y|[/Fʱip߾F>È~~={,AE9bԩi L<%p"(pV~|1~05o ;(AF5xtcΕ>5wU6oIGusYLlq_fuQĘ`=K ÜKrBRTfDJ"blÿ%1%hE,/|yvJ1_~'IGh,PPk-bDUQ\K lpJ0E=^),YCIh5f_&FRDV S-Q&9^Z!ҠrU-< D<loK$OBfvvPRWÚ)t*^1: =9ăz=(ϲ4c{ \ߗ>0_=^f[x)wϹDn..X|~>~~\0Gr_ V\_ܟяxbPϧ/㱋/ܧF.8/z?mc(Ro/OsEhzd%/;#<\ڪGg ؘT i246YRAnGB(۠z`0/CChk@5zyI2D@n`nWgGZjVVq֛;Z/"+Jnc@K ܢ1 )PqP@*u{+\m}zL@z7 g [1[ w֣ot]w/E;fseYTp6F-M|}Y{yd4視(1Iyz7}{2ZzgWlb|:A3f#}7s#睞y8txs1ɳ^='Ϛ@/4VOuʴ$̬.ݘi% y&zMItB8EF?IqMS]MޤHr.ÖK4l$εLzDex~}oY}7绛{Q @{އ#V:p?~F(.yVzߢMP\Uv8whMͮS!qt!\ixҏƷCO5Id$90o9tJO8%'bY *tPb%VN`R!U(|1*DCh B< 9b/^{(;N6>i3%rMͣY~{1;IЛaH? 84'Ɖpj?PlH-Nf f#SsS`- ) D av;Zێ "y} N;}#pcI732b4E"=Cn3fڿuuӨi%EVXF0#JrI +F2-ʪD 'qLI4wsfr9SG2ة8FیuS_8q1T UIuSB [I+PI Zjt4܏0UЬ`;E3tky:=}'NL J-ܢxN: EĻlL~D0LvxT2`.']7EsSFgjrO w40~'i^MaOc :1խ. 
LZ`]5^{RJn%)pAVa(y:Zt~Rndй '"眕DnlAVʙd%r? rƒ]bhw2^-wӻ-k}1S}D` M$.\4~ ZVfzc!*~{,J 1k:CF sڊ875b]T #*R" |p&];ۇH<\`X6B/9X!M }ҍ^Δ#[k1E3u"iJ1bAGq8 7F%qqZ:^t6Ҩ 80g&m}m(d[Y d@)K 1ԖSEZ4#Qسk)8iEK al.ٸ>j!5BTI{i)"gP p{р sy8c'W1L(H!9|;%vM֓0Da!Dl*`ޜYbL').iP-hֆsͱ) #΍M!n:1gn\E-[٭ y&zMq,$-]kut![._!ܽMiS6Nԅz>yRNg!hsQNW>vjm}*_Ak7_b:]׾LžEIe ɲ" ="X$`Y/%8@#(m@lϣKvDcCq(g)App"gH 'w&:lcڠwhmS(}HhZ!WT* Lm} jccxSn=Ӝ0: #$~a !:!; 穱i'&>}^{ygH B1 %Qa*,Ҕ߰Br-'LPJTskV別 }PM@F4<ԭBZ<iЫj1sV1\-V9H=by(lY$QjKvAtZi&VNbC2Ԟq;Dreu,J[][o#+Ƽ,=%<%s$9g5xQ"sdy2"}iYҘd_b jX,*K##NB4@-b* ˣ+=ڐU6?* Y? ȵs-l[*<=45| YϨb^Zg?SbAU+Tj RDt%h0%T8AG5)ڟS H>Z rַD h׳n+mlG3j9.(igTE:QF_6N.9F.9F'c<4?+Z3Ak[ $Tۯi iBZBw=:ǻQ%AX'DR깱F]>tI%Y9\9c9ꁕa|X%TH5<qԬ :Svstu6OGh[__1w)}EҚ*EA1rYq_HE#^ 뱎;,l0hoXb簋klB4Wh1(x0Ȃ܋P1츋`neg#jwfy䲕^o~E*G A~kޥw\ TtO 2f yepUΞR]0C#%8ECq4Nq(u6U!R{QYۧwQYywQYYFemQ+*aV;fpήp !n_t J8E=_t}F76"ݠs _)^V 4swl5n\%Q3SZz]36tvn㠿0Lb+fl>=x$d>4&L3"nX0s 8 ='xr8$rbē9(b|7|! /x4Z\Eǻٮn3XY}g !A BZ.r[Jh㙢7f0уmesUna'K[m)>cpF ˆ}uWirina6S|«RkQ5ҔF9G A;@}XyPŲC1ŵa*[qsem|yG1&t8{w/Ske:ue素t{2 )잾wb# ?0o;ߒO9!ww߿j=pyŷWxAD46흨AGi7~[.Ay Mo64fﮯc}hUTJVzk"Ҋkj/~5y\  ga0Ѿ4jcdy :ΰ(FM1t͏RL EH z|N dLr$X Ŕ>ҿ*ϕY&-y7Àms/?^v/]]A/; ksMfG<R P6+ӀlhhH.e2X`oX:7o ޿RpaV΃gkBHfHJ`똄f7-E՞@4^ku:"MN@K5Mk2@eהֈؓ&crPGYX:cstvAh4iФ%7R[%׼_%1&KpAUF'FX s@RE4y@cM`Á0i"Z,RcaX6 X;( W*l2sª1z@OL2hQs@wLB kX`f ^7DDپf&0\?&DL׆D?{(tZ5kMCB# J9: kD [`fn sYPپ?X2_?]JN52uF5rNk$V@6H :3*#|p@IzB`To?Qc*TRȵ]xw&r 6"ǻ6r4ڎvhhx @xRmiz]*AT#F&A^-\vuW$yxrݴ:78xLn4gJ v< Bzф3VaK8< Vi>hT |[<}@Jq(ay%).=(A>d/[cIu[((#h2ά&8$\M4ol9nWzfG5+>;*9?q7js<6w-=^ZFeDVdY>igfGc27dmN3ЅNp'~ybW 0/hbW 0S4?]ZJV93ZY-M6"a=fDvӣx7*(h4ӣO ǻ1 6J̕3J')|Zvy)7=?X~.f^6'ڸ_弣{d!T.h*\/͉6U/UoQQ8U[l=>NzΫZ{TLfK>[OH-6frCeQNҮh<]_٨#s'<#>EFO9Y?u ԼӦKM\_\Mӫ§ ?觲 %a@%pJ0a&T9Og7ʪeԓ!#o׳nAl^Ngs<CUemppK91S8@}PLeP#t[;TdۧGGx7&8V1FD6ۧGwcD5a)5Sh7_xKxC>=:ǻQkfֆQlR}1*~k/GPQ!Ud[{ JX s1TOWIF A{n?B:&8aT=JNay:Lxb%a?u0)um.qT߶)frOU{U~|_ۮTb;nL?)XêsX&Rm.޹gEX-El&p+e Hj!Zi#ZTXqPBIU"ĝցXA%R3*i֥BP=РmD #Rc+DN gPVhp%{R&-2ء VZr)zgx?>0!8%pc xt8`pbknp!%HDdN纓ת+oy~Cg6ۋؕHu&cX&$Vo_>UY|x{7뿦‡[Y'WƷܲgKҽ[a 
h}`8i%|к :mtn] nłZ y-8JuC6X)W0Ѭ[֭C^8E8HϕiiV ~pl3(\SS̓ %ZmZ=)f?{k~g6 90b y-TtoNƁu#ԟZRP.mB0m m݊͵n5)Z)62WкQI`JuC6X WiiX֭X\V!/%bV|mk Z4[k J#$T2&;JgRj(+[թ Jg\*⃳J9ian!&]6A ev׼uuhBd)aQP|17C^8E8%gC&,JuC6X ulhV,huN"NI1{кV4Ä֭T9)mutx[֭C^8E 8%8@^kɰ}k9el ?;%Sy& y-D=go@7 :Q/#V2xLH+ Hqe` %m,5Q+JvG~їf7ra~.yÛ]%cde)M7Tٻ綍$.=tU[aǷIɒ\I3LO&m#_펋LjBSuW+UZhxHW4UZ(#^V$^$Wu07ˊ:>8ҳb=q/cpv)N?=β?|úLyLly9qEc۽)Fэ%/fUosͥ6cRŔҤ٨𧋋5a띊B&rxc&j% qk 20=,/s/\LB$9ǨHj^$ 90  YIfiOB$ yIЦ$@ZU\5n_q 90 ~2=^3Z,Dr`!%Nffǜ+:liӦf$@n8Lie'B=vklND,B ,mx_Lc҆L$LQG!뇙ßξ摡 LF "W15GwY!QsKłH:#Xy!L#w/huaA.s_4 j ^[Q{kuànqgg< Tt9o%ţO㮥K?[)ƻcF#c8lO'Jw?[FL~ Q5f̦cʟ5OYIUJS}de >ÝMps QaCLTjMT TO!hB\e_ƭ/bU xR3ڦWw>b""sq.Iox 4T K{m},Er"!ڈ ٜ_5!AcԑBld,PHX*,^OGa+kE՛V!:FCjg*LV0!ۧrӰxۛ/A=:<-sUΥ«nB>e3EU_:xsX`٭89C9=Pjbt"oQSue}Un17E({UoR'q%P﫭R<BL" j@8ؙ j8K2X~/bmuF{](⟗X 2 D"|2 cǡ^- b0. b0nFx$0amI X/N Ro0 q%өT>PH͇ 3eTաH5謷ֵ۪a^7|oak[- R*[EC*,U*b@KO ZT5xC -) #jFy4F E^Jb.B[X}[WJNene?޸aڧ"٠†U۪cTc۩ n! NuGu2U.Dh"EU8LeɌRS`0Ny4m gr h*P+R,uc:)Fx5f1"g]d<%VscVCS:А: ; iXT2…ImF%9弻@ 3()>ܦ"~\=PW NE 9?.rIL"D"ðFH&ɌUn1JU[~DW f8!,pܜU_UQI'AՀ"hD,/E=༜`< :`%&ً5J[QQTF pHq_˃n͔4;rumoS+x om2QG+xB I?Dfr5ޟaDyCU4$Ŝ8@$ILbaGO Sp_ m?Ѿ=XAa 8jf_˗* P%}RS[CdPN.[pbsI/&X>\nbWT $y/2 b%^DlJ8 ϵŔf72PNl+ҚT KYОl,"c%"* 'a8ICLR3 R<& x HLgzlmcs K>r&:kø&:khPRkͩgJJ|WydJ]I¤D(eR SE\HiC-M;jti{\vo4wҼvyF'J]4 4i+I^XPT)xd^P"u`RBh4747|K:pEiƠIqc&r^ q%`!eq> Zӆ gSxP$:FQ L1M3Lv#6JQj2lnMƀ'X3ɀ DWN4;X6`m=[T;XuqQJ{j*6fe#(l3:xMxvayx"vAYUi",u5 ⾪"MXPAO,閌"@vK-I»bcn6#͞%wfcSu" Q/8l3aM!j`$VLjhGT:$xk=2kfccn6#^*yTIk1 BĦO :F4LMje\)3H%#PH((ވh dRZ8gF{$(L0јq )ƂR)N W3q*&xf{%aGxU;]=\m?2|]Sl(7u%ϳs AD$&*N#}mlX#~9&WEΝk8}ws=T"@(|Z՗7xF4^(-]Y7]dMEV,J eg*DPZ䁷m'64[8JʵbR`úLXu.x*A.<0U<gd {ٿb^ず9Ly_z[G7*$w|c\7_O>鎀B\ip ΅@uRJH RJC"\!UY .vs "!0`\ǽ:J?vMS >3QB hY,'f^8a_tEI<|oFzc'x/#_N/ jTB~VǮ5x!2  5@.$ @`-k 8>z0 *bZ1 ƓkOj#@Թƌ6r 9:ȫ9x@ۧu*)ꧤ{' *S(~0Yt׽4,^=Fw}c2&-G.],s/Nj~r Y.nΘ2510 nы]\8BѤ5gYEic(^)"KBu>ok BKѡmg Q,tYgIὈFbg2ʂ qӻzͨ`fy?&0Ltt"V ٖΤZQ2={3JF TH=Ig+x{Cpc3J)ڛLP+; c& -dI a  !]ݘۚY.pg,>m-- wI'r%;cbÛ3-vf.4ʺW!s'Y<e>f x=fq͡0.\"ͼMM1xF% YiiL3+Kq3 
>^ڣlup~uf}%s0iYYo7~LPK&x}/]O#Ǫ1NN#T=zlbeR\-x\٪`ʙ5o+w+V{AdhVMy5%< ܆iV膗bWa,%k5(RH˿qpXˇgUgj'h|x2f l~\)p[fa*SKʿ0SJ2W C1%BSnRIG~9Wh&$u1SeU/nZ(*ɏr{Ԛx=^ez= ŒٍR봏t0gnU+0z.~k7]}o2]|Sl;Nj(MX<t0 3_$|~d4V@{݈xg8^ 'Mz<7 duSL~4\⼋W<KiMdE. 6%,^|:R|-ʹ=Y9^֟]oqjGu f??s䯮ߟS]NX1痿4ޣ[5,}l$盤gL8Ntzxh. nZ;pchXѾ3.Pr1X+?Y-M@bdֿ\ט-]s6tNA! -c@Lf^2ܽ`@h7){E7=Fc>< KF=z@҄&qi5̴ԪFkz3/N'믦\ŏ:edV"tQ[-S+"( [%AGp.J:ymG5Q͵%KƠ]kdup%$eRDž%IkX)QLL dh'=rRo~-vym8ICQ.f+ek\(nEQ^1rFVɷ]:Y,WupG3y2L=V܏}t-`dv|VW)Fz21AovV,J1!v{@vyN69VOO.'ݡhb烗6A)m~{ l#v=/jv>>〝3)ft|Guߋlɛ>dof>f[^slչqX 'ڋ h.oP_cE%gx1[M˺ഩ.rw 鷷cl;FoWdRͭi2Qe;E#GYQo&Hd]6}=u]F].w ~ ^6:r>L/hCly1'bt_Wm|,ޖ=|hcG˺p;/׵oq9(sV^QF̹L l[e*k)/F!*"6Oѐ=1nS,zw rw!6ᯟ^\ p+I4jƒUp0A'0<4ABG3ABt{Ïnsp8mg- o˝bÏ' 75^مw c0C8E+k#}y {иIW4f׭E"Qg8[m.2Rs"$Ey+L̜1EՈCP7Ozxx2s/ ]^;%¿TEqEjg0 \~Da \ 0LmT^S0G`BwG0; hȆD;;2X%I< @0vp`g=0 a]aה"! %ԙei(ݖ1F^I0.ȣ]\0^tP(EgƋnvӍ3C rEbQq6v2nhF,.yZ X4EyeI;\ST$F2 duhXr\\IhaHc}L\Hxr`%xU?{-"y05Co<6Pp,C}Fi$I9S0`@Iv"09?Zm|$I{aظ׎)x6Fz ]AOc@H)zw yN 7U[MO}n.PYha" 8UӶ),^> ::ղq_@r}b⣷%b.$ągE ͡"ŃsNll$WD\ 3 RV VuCݏKHFݯlDߢ Mf ؂o"tfwpGX#Pon6Vvx~`e;́(jOjaq)t_N |2UYo>?|c`6?}"+l܂.~OGQo-L`0JݤoPD1[s\Jca w&'H2E{_rp~Ԗȅ{3z}V>FCog{sGNN;x Kऎ䋊]Ҩ;Ү NcuW9˻1n Ph eRWRhj`Ӿ,JTy,9>kt%f'}(Ȓ,r]dKEф97% Gs;ĶP,2k|6KeRhKxjCJ%ջL b1F&#Ke wy^.Yhq.5 ~ ;:Uzz>]W裤˿}o ͑@_~^2>e^~z+`:~eG^qZOK 3'B#Xu۳ӷz,;#9z5;W? _җZWI|҂HOggCҀq8cWIO\w"%64 |7N1xrz!>::G?fWv|o1:/NA4dv9/UOBꈶn `!dAԛjC 7' _rԼ#q9A=I<[PUS9zy1j6|i|O+ EIP,OM(f7gףAF{qXϬ04"A6mWnrYw?.Wknq8n0Vh{abST$[(]??~Ml9DI[\mBY"8AT=6O/VU*YqXc=%FIܩ-ŇM\lX~!?^!؜A ѕs3a! 
p-GzJI5>qoo;[q( WbQx3*Z8!:Se*D\cFIn3erF%e);gD-ab13/T=־3rY:A v(!fz$ (Wĥ9 Ʊ6< M)g#uӼt'3u"' T _)HVv29R`"KM*@N&rP_a&壻R>j }-W#8uI?5NoMfFIiw.IuCLgCIQiGI@QF= Q\6/D9ԃwWyRPBGQq S6^+ ػFndW p42/ TJ'5_ {9=z-wuÕֻa1e^O<<|PIYϯv~^۴+lPs+VznurMG5ļ9 nNf>~yA.Lc?uNbVYd{yt}݅ \R=#}j#)|F<͵|h<ϴHq[KyI\,1/?m>/C {8{Ƹ uܷ #D{ es{g7o<fٷ #4n| q6#Z3;D?}eywuÐ;19R϶k.}䎾Ux B G.;; ,9kzŵ0')ܳNtC&8uJg}&T!u NOkn~M HfM-3ZpiĮw&IƓPe z6VHęєOOQ潅Z:uWCA8IZQ/t"SxV'ew\;@#8Վ)N8$:=ş v6&p˞b4^d4!RRў$fǖשAJEVR;Md^JV=>>z}QL]U})V8A?D6qZv'*ږwrm) ٢Kp|D7-* `N1dsRʨRbyzumۮJC K)')Qz?I!h7KE'˓RQoqY,r)8ӰKAS$|僆m/q0xZO o<HMAJF\SQ(8$mV o Aύ!y8]iQ1-a U`@ǓMP,^V x6 ٠y0Z.,* 9,m( Ls( [4_Q(8eUioH;+ 0/퉮TQg3x)kwI]#"'ӭn${TJHTxX~P3Ik<ա 2TsS V?HygU_3SJi}F2|jorb0xC2&M( 'VA!WĢBP B(@g 70UY1M0`g]~95e)7'4yW$Orkڄ}^԰OS 5ޝQY75 HP5iQ:WMv*y5ݾ󣭥[Il(LV X\WTI4xSxoVMN$#Y zDSYz!Mqal54.Ob?\^}8P ER|?82%96լM!O%YhzrH'WO=VMerM;'a&`!Ǻi!sôMW xel+P!hּ ~Ti "?.s)m2/;:Yo٫@j==3<[-}Q)HҼ;XϾ3}'Aйf 3/?f;o^m5 ^?;_"E|( `YcI #\3MgkGTP!+H01Gyuv0?_ޭ XfXYf 5͌ml݂,ߵJ_]_hExmf\ٝqYS_~߼.tF|Nxy/^gSw6/($Z-H.VX /7(Y8RqH>Tt?V7 q`| ._揋^._"/sl |1C}X@()>|z/k݁f ~ܻX%|y?^55ysEt"pXI0:WSǼ),םnmZw+iw:OvAޭ P[߰p |C*9^P+ԈoȐo/oȰooȈo(oȘo8o 5<)@CGm@PTL*P朇_ jd oh}Ch}Ch}C>2:Z߰Xdyl!ޑhvAUy݃NE^ܡhW~9/axzP_=}mvG\nyV+'(.?m_E~Ƹ6D;G{,#VnM^ J+jV.!kX׵ѮD^ûeMz=}2 ~| r@^$WgvW*,XX ʨNlU[w+\evCY=-\E#tŒ݀'֍Q`b1(:mTn25*iRhU4J4S&uŠĶQźucT֬[|ܺuK!#W(kZb8wŠĶQźuQJd3SJZ2r)-{-{j8ý*DbPj6JQ" Sўu=+UH!#W(RQM~L#1(GQie]ȔV٥eW 4d*SȾyb-.[$T'J֭*-EY-\EtJg=nRbeAՉmu2RH^FH -\Et["9e~N) C;ZNYS`R&vfz2ʦLP Њ*:eMG)ƕ VB)SeTIY EY JS5e5LY I\i1)>ԟ St&eRi"adtMtLuJE֠׊)6E֒2Aq1ȚDx` \5N赒SdmeVlh5SxML E$JYKJW1z-\1OZS - 6-"kICA)Sdm%exx5&"kSd-)Dz]F8Bә)  1ƩSdm%e#tx5.YKQ.#k\YKXEiDYEdMкQ5%"kSd-)Ë EY"kI^d fYKVy2&GSdm%ebdM*)YKʄ5)ʦ3kSd-)Uhx5%Y"ki %^dMNQD4lx5M[1OQD4lx5T rdÈ<=_7;N\&מۛj}aoPY/,2bH|NspjKX } FO7za/M_K6epyhx+0L}A>hv$Zӏ\ |۱|X|i9_<%|?͗s_ؿcPe yi``Ei4ex ggfk +qxd`ՄsI s%\PĽ3B9>A8jT3!͌),6Y &s"uf6 !VĨ i,QALϜW% aֆspA +-<{m0v|owb!Vfɭ1fk oaz5ysEs9R9#7V 9ng}^/U^t eFXF(bD+Ҁ%Higy0)Ab㭗L9F*ɸ%bOAMby-K2|>-"J! 
jDslz@Iy%&!^ "hBe&x E> BujpLs+b'c$xjc2y3㰖V !ÇcƼXa/B`"6eS@]줡+ VDb!M!dۊu8RsbKZE` KdUbX"d^OE鎛M Qcx|h,H߿%o,?+=0)x _؃vn""A4Yxw:~|}xݴ|__ g=\`?LCx>XPe}\+:Mn?͹ ?o4z ÖK̰=p1OJRό1 X:%C6dE1H.X>!`f"u1 )۽ 9-nn{WW1h{6l~駎}oXIB4,drgVRv4kݨ31C>-ܩ5Ngn;?Utcg4Y?\B3YgIBk?RoaRv]$Y[sƣ39;?`Ubvzp*;LJá'{o! S^V.BK; FONl?q/,87:bGc6 69: O`f/g~~_Ynotl̽;8GSm҄iXaSB<}|U_ ?o1v$YLMHN1~߇8~8l3# ၹCnU`9"-it atJ5D kl =6W~8&'hX`w>pۙLAVޝFUshZ]<lBrkdI^bp)$]̃NZ==sH Su8J 4$0HRj͙>0qD m*)+ 2Ñ]FeHԥ6  RE&E4c1́%@xLrC$R+ĸlaXhI{.c!ް%0UYP0_z5[4RC)1c15 '7oHXS%0~xň޺b0<}?tTIZ4&'>EÔݘ^=AjUB0񬈳77+Hidz~z;Cg"%9W׻vWܟ3k=O 1P;a[}uVٓcVż!v·aNCpKVDI8J% :iَ#zWa'$Hn:{zֿ35|KEah\Y5)Ue4& 5pkL&XQ^ No^dH*sW=Eb#v ղĞ<4&tSZvH"ߌLpfgG <ҵ?%+R!;?vmޅ;T0^L]?j7>]%8zYKK4>DH0Z$F8Xb 0B8c՛Iq8`<'?Yd4oOxx,$ A:K gEA xr@7[-NKh.cٚ4ơf}0isz8gMV!Z.= .[-[,:r2-:q");!`Y%m = C'03BXbI )+5սqs@jG#;-VL6HVKHFR͸K)K[J׷6$lj5G1)i&JM% gc$c7"BFn;r;.b^iCkcMoC(ehކx`XH菮yE׉?MBSvmgUB=2$MAH%/r+1;֤F{yOjD;Fox> }߲ѯ{?)Q*c2)(^ȹ $2&kq5p!8x0IN *ueSV]$}riQpoSYxdXSo ©D)Z/Q8,R1a ؆ u u%J&u4xJ+F4F J roA龛A7 fPvB]4qifsK%,=t)!n- D!T#Aכ] nWBmR@#%+]`}xvrxI Ƅy(P-qӕ&RÝ%_kIyZQ%9/yHsHAwBT|:@E[aƆ-aedor~ NnUuzwΞn}n': 4^U0uѰTgT?_A4>uiɐ6Q2wkE&tm8(*.kJ-Y R|Ȋ3E)嫆g w,A B2&R8[ (?~EW1_nY*5Q#i@5]P!~.̀rs6_ ^eh}Oy#mb0J&XDVpU?q؛ۏzIJ}0a&&iZLt&;XD|#>6HЁUE O8zh⑩>GC$SuPvՓ~"s)""DiWy-{(B?c AK ;qa,I-%8eЦJYlޒ̄# ~R0@Q|5 $\r̤8x!) 
@Kko<@yWhL_\҇#{k!x-̃ݬ] " ucqpZ^:rMcCrZL2CvS"*InqFd;@WJ4>UnLi|D4>Ě+6dW`DK>)Dt(V$ WTB1e%6ErftYe\ wa#,K|ܳD*KT6{:]lTFڄtr J\T!lH_>z4Ix9|_y%,lp$nRc6>[pQhkn)ՀyBQM(䄠|7WWY5"UI}O[*y5ǤDʴUs S,тfIJb>cMԎVHՒ"l F0Cz4ɓ[_[ri``I:A9刡8MզAZQ41MGAS8=?Ingsoy_ DYAd%4,hdu4vY20DӀdzuL]x[׎>^p\X ٥I2 fc3uBG_nXQCjI]Cs 8ohNF_(ѵK~ɗ &8QUngF"DHX+#h\*~W"Fc}wc˫AJ$+ɛ47 sZ!N+ڐO+J!di}`nωqAu3P茠d^̧f<)7TLOQ@5%\ 0pݹSN3gYQM.T&I;w!/*T쏙:Qt+y8]<߳y39jÏ<P^mB(TeXlETI4F.roL_t6<\ڐ_XTklUbzFMKưz@`*<}h릎"duN2'sAh{FkK.re6bm;mTࡑjӮԃǨd;NHXs!S8l(T7i>¡W D8m+@|Z$oҤfo XEl(#uOK'L&}Ԏ}o˔a薥ޅ9ִo4Pf 9`.av، po3f+{2NJs}kf(XltAS<" ' иd]U>_\T&K&0W 5,OHyTUh/qA-!IÖeR= WT&rVNR$GJ{%:X]+ј>FBHQ>?/&@Ok~_MsҎ>?Pţ f6i6L"NQ mocDp-;l ?Hn aYoCjuGBYo?aD /-J7`,9:LtJ ?B )|: kWZ;H>eG a:o{3qX4DzhiO"QhSwmmJ~9J&Yb =/$;vƶ<.o%ۭ[[Jd0IVUXd wsEZ܅Ů~ w۪nQ T/j::7>{WƓ`RΛ[ێ:Rpmg&sގ0FxG80qBXIJJ}2}PҭvMq=a/a/ 'cƂ}(](l[RJ *եPUX 6@3sg5Shr@ O'ay<7x<7s]A= VR" 0+ x*~nds ILq;P@&(024JWdlJ*OrTfS<QZk)̕ $y"|0 :$QqᇋA立yMP\&wv{?ZT6|aLo7.FWՇ*&T/}|CQ3*3.-N&+G$$WEe+C݅񳪍ku-l<bqAϯw xzvz2hKuX|c:XO{Ofp 7G"Ve'?;%>*G5 0f>,dasWL:uX48-EA868"f^vz_ hpn4h5+1qUDO/^ζ'U8V_>tO7W$ NڻYK9Яܩ%yfţ Y߿R0Z:T (r))uƺR9_<(Ǵ))0 33.x4dX+9/&3m/W4Z)h+dQѠ7sok, D-[6]\zעK wW~fGh槳- Ok5b|3ecTY_sͽ:{+n܋?YHO;r)kIBs=\4!7Tr1H1h#]z?=Tֆ|"%Sާjqnjz#j\ RD'w&툧MTv-{ڭ EtMtLFxq^ 2ӏFy|LAt4 }_]\~ۛr9eOH\>y8y3BUKERfʂQ,4 T"3NXiQ$nq(6nJA);HjT"6RRpiQSt0m5hk:D"rK:0?lGpc=QL?B~ =\!tu0B W [ީ(Eby(QҠZ ͭ ՄnCV﯃_| 4h)=D*ID!OiX4^q-|1|n^$H\ݨI sTpPzWF0?p:( f%8Q0r}om*lMk6!ZPb7V( )Cɉ.i*8'*ؠ)V Z~CяoƆP)h))9L;ʢ(@Ђdqf,Qx<ϡ*Eu)اXg39@Yڑ"NuǦul%uZx! 
N5q ~\*nʑ(JdӺbc *Qr V͙Lh,N[2V{}̕v  }?[J7f6>Èmҧ0Ki"}2B9-BQPU=au'Olk?eS2'J^PG|(%=0P%Gv(QA?vN£ z*#*&"1J4@ФO]|fR_)"}GN2i4XfD0ְxAv; WMU4V}՛rj/|2tǝN>P mYIWLuj$uZ Nɳ%6oJqf~p㼖蕁T@jubg˯o|`gww!;9glQ`c)Y")#0Dd=>%L=)hV :W@-\yaJ5Mx7F7'cFj- nG?T-ERӿ|UEƤ)xu/./V١D#Ε'_mLU'ه}|j9jsA77 )UBt2g_/T҇}ʪB`ȕR( #~crX_+E2] 0J3S'3(q3o"!wFH0Qh^"7)2rg/YJ$$)%EԀ"Q]7 _#UL6!ݍDو;+IP$'1*w' j2zWU\IO|Z >6!SY®\0Ε 2 r4f;l+J .+bho`2-!D;/u8eO/天eSb-h,V#JfWu4)7n86QyGԳ^V7K$5${v9w=4}*- T7Nv *^ *[CR QZ cPЈRqkB"|9[* [$Wib \ݒ^$yJ礖 TL`%P*Fqx\J8{:?47S̨6= f GpJSTsOaЎ\$3EI[&wqyCio5击9 JhH kmVuTs}c1M?h[ z EbТ4PwtvNf_$!^cM/맗\㺎Uˬ]`l≁<\g|L>]{椆8yֻqQ.[9~9H%oT d8l8@4D :.T(:XjVrRyO1ZF)@4HXA,X}A(I= _,Fi71r@NBx9?sF~HoyjiLɕQVZ Cayw>NiC(TިͅkwW%/ch3(r԰jZmbt̠^,ߺVtРd| ]@<ޔZj&eYBKAb[mK@q,^=t]N-w;N+d#\dDF쓧 " uU >Ktԝ}Lapϙ _ٙio6R ?:;L;7yOj]߉}_/r0 ][z' 1'l;8e5&>GJfwhrE/>epcQ CDy!뼆, nwǖ iGe&C;g04bg̜6|wnp .ĸipdG{HO/{Nq0M(?z$'-h)G4sGkV/xH 'Ȥ=qGcdHH="Uݔ4#ݰ5Km&^:B68ǺR0kM< NZå'\pl8F &dW|o41UHWvs|˂`GկD%/*?s|Pj40 Ƞ1JD$.<%!@ )#omBC+Br&G{j2AovU=aV].Pk @Coja?޾pqS{&8:+O. .7=͞E4 &U|puyIM2:u7v0w'{ҞN,8wv޵q,Be`TeE$<-Ɗiّ;[=$M̐<kWWW$>} _tY6rZ.P/oujW?b ci]PHcqX0&^VLcni#/'cEHSff tj l檻/w?WT/*WLJ 㜫n>~lo/I#C熖?o"H(5z\S{FOnS"~yOi=ߩHS'~DSCAy7scLȜMlڃq(7Oo6s"tvop:6ٙ [}t6|CyK6:A\O}䁙AdY_I(:{a)=fk StM_pM @sP Bi,P :M=jɻ5i-JuYEUkf'ԕ[J7((uv렔J֨I%d Bjs 9O+Q]K1etkZIC4JMp*^]SI x+ B|%%aS:~M*߳,rutw6Ӕ%>ش &BDE󡣢 Fh? "2XŤShk*O耙tՁw4O۰%oa%S]Z;~VDBWUgRQD+$D}ffŽгl4SPzގ|(r|ⶣ~Z8yHG$ YG4ZMW1p7YD: Sbm( l&.晅yZ_Sλa#y%ԡw0N4HE2's))gЌpKeZcTMj`F S k(*8@PsIsr3(BbT䈍Bqcht;BX_AeoJ*dgOϼk g{痓 O_ϟaeyʺH۹D}ysmL:a5·ueµՐ0S6؊H#έ{n% S-Rubꂟ{ 0fҢ '<Q4 5lGy^Vl֚[5.bKf1 yPK[Hd 6P \`SVEX^s+݉My+$l@zY:Vqrq_|q{W3ۓpػ_p?ЩUːw^ӄ=QϏ~F8>`Ɋa;?ۋI/9 doֆ'm`Z<2 &(y$YB 8ŵP8d1S81Ǘ9DgFc4!z'qeJP +ƙI>5و(ֆ'2`=<)${4&? J[ }cRm_KVEA&2=l$oe@wV'QSZQ1S2NQ>4Y1 o?_ErnmBbH<*J'Q9tQ2p>՞Wfp4-{JO(Agѵʦ &Eibr;0l_2a3|橌r/cˀ"a8K %) /E4?)?Cw È3T}-Z:4QD>

f \i|78I~{[F3תHrg)=2_JA53,W-I_#n??fdRҼLJVmŃ**s;W9s;Wr{Ҕfrm \&M&z֒Dp,ea2/y5?_.dkͻ6Ouc Ȫ!Lb6wNbu32*놷eD2~{ʥ4tf#!營d\l*lmʧ1=B) ODQ"1m,R-B+_`$X|v}Q'-`\ͦĀsL]ʋ4t) f`*g0ڛi *$f%ʢH|CJ$M]@V)vDм| Y8M2R [hdz9#P눝`.tCP SgAm歅e= ZU%/5wCEc@EHJSabBPg Rۀ$ЊP/drGilZSJICc#W˪VWr5"#3 &údZ8B~u_̌Sw_?/mn_>Ҫ;!OMpAAEHm?qϐٮCpqoc } Ь,WiE ZMX6 `S uW84yzs=АTx!͞e}vGApo3~W RqS菸*!;) =glu(6= =%ZulQDq_ Y8a$^㯹(z󞉳3$/$Pt>XW=UvA3F"Ffh wO kuTPcqD!Kc]_)FG ӹLJ r Ԛ60T Ո})B} h頔i\hZD W:QO)}(y(TXp(b'!DH)C|C@@)te$"v9&wFN1MdBU.6fIaVx=0oڗ($~)Hvj\m5HM!qa',|` Pdp蘀f 톙fæ(eqӬH=f!QZizhKttnZ/8v6`֌+)BI@TxJ$|jv'n9ʙB 1DG,kV(@7[ O8v "2L rG'J1X]!7ofb#3S01TyK(H(>V1QtT%D`cjgI8|}oK Ռ6:}m/(X%R5"-OaS@= $-A=)l XvetץdyhY6n6h `I[2b" D%`ՁbBaZ>" =D xR=g=fΎN?Ŭ|Bm+r4XKаB^]5J($;^1pMext@i ]&`9yӖdV%E`5 7uTlH~ƙ 3SXv\cn59}D7Ɍs:_%'ݜYv';^:- O/u?~*}EJS)+94R^A ~6S:/VӲ*c햫4S*|nmh/3y$Qј0P o F$UoH308*d5Hz,55C6z[<R' ^abZZ SD&Ľ>鹠MXDI4?UHf 0U_AHU43 zO]Z)-NԖ6)2,2%%ˏ7ʛSՇqlm¥֓_Pe2fb|I٤90_/_]\ަrjgf>Y ;𛴹tkT77NDnb5z@LUW_G(1lr?B GVp:tI'}o8 -x"w8FNQ9%ݛ_l'"ỳU&t{]ô!@EWY@ѫ$̢Jݩ墠HS(8u\8J *Sk_m,dZoo)?L~I'miL9tNLāӁ{O^8B2g (ZCHB !D7*ݝDLnB!m_σIno'7[?8I[a-x..?|f/ןL';\xҒ 1e&ax|A:Q m ,>7B9Õ Sjm2]mx _ևhF{tDI`HN؎/c Bѻ5bUZKQQReu_:Q(v';"MV1ނ,Ny k[rD&+3cW|ak`H-/zt\|(.(Qр+_4<$"Vn Nz݋Fjja./^ΉH=<,DhJg?RF81)#Cuti#|~HIM^;aǀrC-=&I !4ƂPӓ1S3K8b)UZ.X~vd1UP8aAra'RY,HDSG4ш^Ra#+ I0̴8U۝)>92S1yUDV4 ^[ S*iE w !E4~aҤIUZ QLe~bLȢfi C0SB9~IGi"{-,8"1@=Ѩ߈ux[2룸Jpc6@I:*N4ђk]0m $9ǃ|F| T;8a0ꯎഀ8>bT+^6ޢ8 >E e.D=s WS87.$[Mw>}H]9*4̮?&qutƮr eXIe~v@DSh}hΥTTMC =4]HP]FOq#ZՅ Sп]3NNyG7 7=Jhz/(1#ts-Ѓ:R`ŽvemA%wљ%Eu .,ȧi]}}~σ7+\MŌP<Jf_Ձoe$mbkGX,gll׿Oޱ8%*<$ǁ .I i.wen ,DUL8*z|2"/=H󅽙E*He(n P|+aeb w"Xg sfDȧ4ЋV;{Qa\ 8E_@767O#b#@I؆2@9"1X(1Β ~ P#5dz/)#bcc ,)h3DNPHWY^Ly<)B7Ix"fORX=ϻYwg=;IaF/8* COzXJiI9-\B#*^kN`IT׏Xei"}q-v" G$GF-|,iCz;x#IV E:t*ǻT+aLv"?-%B4ТVHEuEqufi}{1SE5C:*^v(cW9v_'( %?LM}gʏ?A{89Ak)Iʏ@n'\?8Is6[i1 b*+#A$s4XWr>O_CyL)\Xڥ&'+ӎt LB\jq0 <5F"8jGiH 4/MAkthn~4T缗o XK0QiF@ Nbpz:#SMF81(OL#F$}T(SVy"!TZ /ѩ瘥*m9& ZtXFn_iLͶeb%8`k1XBRx:DlUD*ZL0c%7](('h W\R>EҞtIekW#\Z+%igt*x*f`Bz@wBNubJE4Hk6b 1TGiw)N@D2y=l{}T<#e`GA^0 ."4`0 
xd0̎MfI &ʪGlTnû^r\n:RϳZr=0̎w1ׅj"-!QiŬp}[Mu."CM,ē&OTˌ2naxm] zzt.R*82y~nt.DgjJYʈÞlx[0} Sm#߾t;ݾ@cܾ}mPHC=a设sՐ0;fL} jU|]BCU[C9$V`|+P^8Rt.N9j%Psřj7;P76T;"eUO?]g'X Hò^],njjVrZuz5k'Mڸ6?tWˀ?km$E$X[yIyLEʒ/[RlYTT]=;<<7W['"v{rDs3:/jsK=Uٽz%?7&7zfw^T>&ю$!\DdJOSnY~Bt5hZ_uuMunǐO.Q2(Y-ηM7qP~]D'v>^D&vݎ!!\D_ʔ8VGcjzGBχ^u=/c?/-F_]aZǡ~{W41Gy-g1sTޡDS49E⧠d8T(w~|W> BB8 tA S xN}I!cwp(4{8jֶW$νPͺGCuQVh:}TVTEI-6SuX{e nh[K$5;QNFJwvP-ޡgbfx&n/~=Q}P(%Mc gR6tnEbXPhg<zyqbD2lV:ىsS6[Of w|,ӟg4'Ƭ\~0ħҟe,@V1bnfQPe1n_sCUP)`GMR.J5;ڙ)_1/\( ?B UPf%s$S@| ڌ3zRLt4Fi@֙P2_%?O tNӻxp못V\ @4Tt=kl0qo.xN}C!4ӬY”DPg@8K[.ɞ5[ pq$wB y"Ht>Sqh0>!@$Kdd[h]vp^禞`NG_iw+F %. -Cgd#{{ ~lߜi&N&H=OK4)$W:O>7?PA$D++R|qUpRwViSTrss=UA9i {g.P0#m33]8q8Emo]ҙtUA*y( +o1hid ),-WxLk5/=Mon.Iv&b~ɬ-+H^Uf56S٩95:c'ylIٔ4i4ȥ9a }~V=q制2_I +6^Okl75o[YT7?l7u]ۇNP cphL Hg]鹍åQ\,Wa@ucC׷1$ntV4"izF7)$PKKj @'UTrŦiOwHj@J*O,c!qAHIN!K-7e%NgWi@g5xI`u( !"im`UѱNJ>_;BBD4eD"pJKxEIk(.[hd-I@YKC&Uz:r~g҅=#hK@XJGZ$@Hmmyneri sʫ崏 FUR˼#/We^3YcK өg;졅3$khNGv aŢ6V KW@IA/ٳ3}]6FoƄ pA'p" o+r Q:1y 0pp)iC +ػP`2-ϥdZaHiQ25G+%f&c!yp8\EwU Z]RK PoHE!g-]vp$(,LյwL:)`YOtPkݣJ0઒XfX2*X0N-|ZcKAp-mBK͠1>T3w'YbpӤnքjOk.z.[h=sAb%h.{wL K#1tcl:SO j3M:_Kݴ\@w>ټ\tl~, o|ǍYn5˛7wx\= 2zzbVlUz^ՄSd\|HLLB J6ZM9xDo%-?z/_&HM=Mnzq4#-ݧC8 n`$ZSSj8 =Z =ZPE?߬+%]M8|X\؈c& MLMTo|I"LsISKh)ƴMeSbåIٷ]~9i?AƯXUPI~}%cdŌR, B7:О.v2wg111tEmeVYa$LA1U/k>!J0 b#$ic:~i3aH'%\Ō֖6@|xdE Ss*ײ6r=, `%K4Zٳn*kvD鰅)o)JGN#V. 
H*-DYaGhMmD,DRo/]8e6} E'`%hl)ٳ3nfYTۈfK#h40QTT(щԂPTDsa+»>7gKtX͉Wʜ4r褝1[Ə?gy7Z'T{; 1 \Sm84ƌHܘΦ7NFHF?@X\ϵZq6з*F<_` :qnvw{o"]T,I}C^Ug\SrG8|jNsetw#\G6o[s8: YdvGPXp*UXG)qTՊ{ZcK09>&Zn"]ӇN3j) $p(TNڲ };!i-RJ lifm: eO9a.N+mAqChAKQ&}Gg?%ZN^Xl3UyF6hhopFKݵFK]1- .-m%J3k9A_ضR2ܶ]KYk$8R%`\nK1FGGfHfu틬0"M!n,A4l&Z*UbFD>E3>|ڱh_qhZe=Om/Y_- 4L(=Ry SLb^5Ccoڗst ; F RU|[3c O+lT S&(D jZ-O6^\omN6{h jG~,*?Yi8' =}惩t/$}ݐJ) WZ#B+ZQV+ dY:UTb ŭ0R\Xaq%F ;_?>I5ۨI3t7& }uU~,'@?ȅIY >Tմl,z7[M'`AnOhS Jθաm u[Q@ ȢMq_sJhtT <}qε$,ZjO0\-!s)JiIӛ졥 6(' ńiɮ\(}h˅ S[.uۍTx}7z,URV^s銽` 0W5@#ƱҫjDIf>-ߎ7u]ۇWH/?+tWZچ@u@g\ϯo#bVPָN5V {0w@ T3[Ž)3W?4ypkVuӢDo[ʋs 6vl?$@ J~`i"Nwi ^X- x\ e6SB \UY3ޝ9x,P^L\ (̪UA}]h&]}T@Q9+޵?q$]uP}}umI*vi,C+fD8e뱫c\'4YȤcg;0bTle%AzL,Zi^Tl)/ =O*z7.䴸cU?;mrW9;>!&2&`S FYXO[5\5b&6"KJuEHj!TN2j0HRFO]N)0;"dEdɊȒ2mQWjɗAꁼyJeafXݦ:f qA"$F=9ל@S*"@HsFIIR")C.6".Cֱ+f2 Хh Aty_DJYRIO=5'1Z̝` F8Xr y^yɋ 92, g:@U2 R DUDt9ւjvAE`Fp+ֶyԌy,,YYdfE }PM l0XY8.ek0(Ă0hHO@[R*Q,$hBQy[tEQG"QB@&pb3Lŝ,5KYgR>kVg-% 3@Pj u}[_N>o?? w_s\Fy@%8l[`N2"0F!80 Kf= 3pp1VG"* Ԅލ?;HHwsX)1q1,2G, `iC\Оb!3A/5]+ av .% Q!4t˜d3>$64R5.Raao$4pa_!x@ 6I0 T}KQr9^ptwx%"38f! ~JEGWRLc"9Gk|~i8>l1t_+?b(y.Aurok;ӾlrR-!('{P;8WhÁkd;ˏa <|9u?3GHt9F_lR %^cp_ w??. Ov`oxKg'ˇqHW7 oc>(gww =_fw<ި '}?>{Qo|fx?N>KcDn?/莝Lhxv_lr[qI`nxt''#uϾnN{zʹ`wȵ~WuzM#MpCPjq< fD\&Ba .~~o:uD 9<"D] SKՐ1*2`923z.رvZ<9N Cc4 $/=0ޟb@sGQ9l-l?ON z4ޮOskj~oׯTZ/ܿeaqٖZ2XPwTeW'Ip>0)ASy{{U*hWZfc^$R1]J\F'8HqZGCH| 2z"ԳH$;_tZK Rn-!{*6k<@ n9  1K9A3,ɳXɅ5rZ7>w`L $;lJR $=xKF`9y&Ƞ5J! XHeq=BROJtFtڰc Z Μ%VJU<2(A;) %u*Rb' Zs5WZs5WZs̕ʉl*<,d) KN׬G0?:d]=VY b;jtʶYBN JO. ^՞|VL.5Z(BIQl-TFeJrD%Bwը?:/ :sB0KiKTBi,UyR,UyΊUKhXJ*\-0y͢0 =%IT$K%IT$+$)CE<QXN4lMTnB3L$G˘4SL8KK5C,UT9 $"%E1R-DI c:dv4c`-pրKa:'[xbp +TnwL.^?~Eٍ1\?cICEko߾]%i:ZAlR0f5y N\!y` #0tF袕/ՎH3tER86ZMB(jh3R+TM쪕{󐥌̆SGkoUJ 3,CfjvXVeRj"1aBfyԙ.i 6!RBoz Bz@`9jQ!iP:0a77&ZI00ޖA:Qs-$UMϠ7QK E"Ooު;ݮ "BKrO%YJ$fZ09l2 -Aj5\ȃ#DL#Tǿ/ϭs+cj)8^̔"-YA&2#5̲N*7,J x@`0I9 !c|aE=0T qtgyH`hll?#%ƻ%^} ԽL߭^xooKyID"e2 XA#Օ`7:,WC.B%ƚ -!  
ss"8X=.0n aag$j%mZ V=s{^Z>獉Vqې# iT[ؿ~^j*qX}@5%j{X,.ӭQ&dmUkX^Ja5[4j2gU5rOKɫ RxW@{kʭQJ ۳ri |%sғdO~()R-hkaOB<ݣkqzd$+ Q! j"_[ԽL=|JX'/KI&ωJ5-oMCv\ІEZ. X(͹x.喝(U>q+PP+պ(UqIs0D[k<\P!պ(vPpVƓROדs B }J)]w'JW_ $YR \ rWxd QΞO,403SHZHHN7Znb`%[ץ YDS/#3,T{fFJ>zN$BqVX&.b-zQ4~Z(A]q+˳}io!:~l~UIl$tR8Ը5* :څpkB);.s_ $0͔񞧾h>m[3ӜщK/u^mc{ƭsnh\ZU~p%Vj+k/O:&C"Slk]1RS!kC/Պ,j6)>6t'ws wo>r wrr/KezllHxbn3 ,X KJmb Y@>8| \̓9uoXi s!:j?^۬7zWͶ6VD& bv)^ƒsw_o3Mxozڶg+eTnPٝ7?_ d) D#\m d(Aws=_I'*1L_ ۋ.P]H_:;Fk{t)j2:@M9` աT,J;$rZ&B(^WR6;g{ِ4މ֛3>o0;^^(K7}L"Mܡ]ʳ5(Lȵ(mW#"m{W6yvig9 uqtG/1rp[o$;`TvIc,ΩJSIc*6% SpؘB sQZO3ݍ%ț̀>&}w@qϫGthы&ohPy mNis)M$Q"ݮQG@+}nn_/:;: <ǛPW/;nٷ%TA&W :_G IfW T!PJH1 X;`c۝kvkk/)v:åhXTVbCe"$Ed2mJbE]E?*bmr9uLG'1㟷`JĽv\/i'NBwEirɝtlddvm M+1NmjB`>$K묙ywf:v!ԋNy}gbq &UӃt"q֟^l0!o}?OyvvBݲ-uoF3?O˃_;Ǭ#k3?0 }>! 9rȖ_?Ξ{9㥹'ZwஐBe;E: [ittE3ؽ]G?*p]rYE'z0z}5Y 'loR,I;&I$*|MJ k!C@Fjkuǟ.^?]t#!^64Ɇp?<4S<%6ɷl"}Yŷֿ7f'y,-k4wWx`} Ӵ3*?_qXa4Ki7>>]<S4Ľ?&朒|UټP"[!ԬRn[$HQX[΃ޕkfDLb.PvQ(^{5/~o9S?<jGF8w>9'tvt̝tvs_݂\Vn"ϟivR5.co#~6YglUl [ՓUBSrsΫIW*jŸWR~%#j( wAژB* J <2ז+2*Νzu޾'_;ХB-@7z>i\FVyOOϭ~R((D 'New[-P_$Lnz~He!2ZPl:hSIiF>(]*BzWS;&Lhݳ*ZF-Q|Fbݾ9u _uYyQļ䑭SGo>벌T t<{QHΖ"#.-1WeMPIMbքhDU:=pJp!~?'ђhIE"J /YXvsl;%L$Zވ:s@#BbP1YRL9+CWeJvL>íWZS6!`Tx56~%\T;['[RV9pzu$NartɤH /V@4m6' X9 /ySz]L*zʊkH=hTã^^n4 )Y}geuShz72i8޽BTb893oCF& (՝$%)# %ҁs Ǩ#y/kyAYHm3'!etk"GABABY뷊/75GwceLΈb?.XZRfdVU2rZJP v]bYגY)(/8 K_ r$YNn -ambPWqۆJS9dqQlȖ$Ц,|XI]bX*V S̪(j䁛lWC8c70άe⎰q^ᒘY'Gi!h \2܊`%j t*=Ȭ*dT`$+6`b}c;YT'j%4fchj9~bE)|}FE3]je/9сdY0ʫ%h-Rp 9 ))H[ "PLP`YwK4QFUD+h Z*ÇHnr(c$O)wBbƷxL=k Jm ڧXW&Mc 0dchy% 9 hy4*li&fMk_Y Bb X|M3F]5D۶ѺTo!+j~x촞oepy2 =;Z TI'XtIx05yd'[}*m!ז @jjI>aY=\i.Lޖ^H?'% OjN5 KQ9 ^dRx9VtƦ5CUDEt-ޤ$Xs 6b yEbKxNWSCJi);fVeͮݖU Z*M hh(cYPސՆӾF5j݀]+ot櫽0-W SAfxg)?{?Lf> k/~~v'4liA)3쇳׷4͚eP*4;Z[=- 16?$?~kTw'G]ec/|z?ȹ1Qh/M=f7x! 
4'ܘ,Vn>|֮%;Z+N8;̺*I:eAccpL4u1dž*Ůӝc[eyzl9aЭğ<~- 5•c̺KJw 1ںHs31.-KՍGkL~UٟgbwMn)NPxi*Ng\.;'T FKKYtK 9]Nliwvn<nj VIץ{Pd # D4vH7| {n x Y# a-9msOjAW%$T@9/ŨUnXwʎ5ROɠA lQ`t=HTp H!?H0wu~.?hKpi&0D b\nuyd~ *#`D+˂E' EzrIT4&'*0@h2&x2b~Bk9hezͽ%,1&̃f23 !KQ8mvr2D6["G22mh/YXu[ۅO6UIRHzWA@vKk|26Mh ƂU"!(Rњ9yC¶Q+tr%UwU%iGH< ڋO8K@Dþ_MIUXmk}ӫ(-r^7ִR(tJd)h~"dS{#P8iW9C#g7w6> h($AT4!CM\#}X\b8iqC6" G{IWK/gm3QJyb㐔F{[:m22pet"YPaBBC(|YkbP)7RfxN@&Cuq_v}l|]5kl+vcvq7w\tvݬV͗_5wn=t!,f^!ܶ],gc}Ys8gVZ6t-?n34`.ɤ)phؐڴX bRD]2=TScĝZpFSp#4v3n@q2'mV' *TCMLF CkZ 6ڍH?i0ŏX#NZHgءJ&3k$tP6gNձ( l'N.gxk(R@QE=DhFPJ4Dx5T[WƷ."~ɵ  Zp  9yq~z p8@GxW}gK)n B= ){'z]w:c2,>ĉ'Y9s4SoLq2Ϳy=DpF.$4!kf0wQ #L%hsK DMoKfD}UA̪̒f^JFtJzXUm+﵄V |Sd`Mh+EB{r y&FTV+۹@mT|KS$`T ;(OR"Yy_01XxJj*ݕݿ$(Rتi@Qa-DU gb/ 3IU+R2!\pIB8/Zg*$Nѭe8N=-8_#`:j9b1ݛk{{?r,@ԍrŃqX ;f_(*=`,%,aojֹڦ|?{omiM(u\UwM4`x٨H.`$ST۴H`lӠ (ʶYBml܉S`3ŝ8=\,'*%a,q1wĞN4@Ԟ+ﻃ)[D2V9m -ϱ%Or" 3:ZK_ }\:Zsĩ:>1Y\q&=Iqf1_׋jїRmK5XZf2D_@. .'S4R}C6EӚ iow;QM7_oo{8t*Uv|fq]:ϮòW6a9.Ͳ%ݞ7YecF$[({ذ\'k\#qyZvdI\ާu:iyI\}'^?c,nٿ\Il_з:Ldtyuj}QN{A7/_~("TCՂVkJ"|gr)8#ٝA1;1Y5MMهo#M%:/W,gTj+D&4&Uy0Fz7g~|bi}_bCO-;l²ӓb~{tFJl홯s䳸? V䫽 w9MtAi!CZij)gNivFzmᴍ!_oL x^_?Tf\t?m_cm;ڦ~߄ޡnP׋߫xpqKPƾGuވt+uh^背6@vؚ cGGsL#bc*6(܎*t+2TAoi%EoڌXv|<!&F3}r4d Vْ&>H2qW!Qʱbr&Ϊ?^!B%,w|#iWUm~8*6Z&_Qp{ԐkmeKOp Ȕ$FlL~ΩHN!]ޥQzsjĨ5lS Q|=0{BDa[ǿTKGvvs|=Β, z_ nHF/9+bVS|sJnѨ:p)Q=v/eV#^cO{n㱳M(6˧f=UĐ#)Z*6u{(4xbG6onq݅R q5QȤ#taߐ\ˇF/y:|O?&ӯ9/D"d)σ5%(>Ŕ aJ ٬·ONN#N_? =mgҜѠ9r^rr$v[;sB槫|?TsCN~ȞBHnQ|8_t'=xo&{ ZE4Z䄋4x'h!ɗ&Zt%ՠ1>+=I{iiI#TN`NDm FTt`Q'#,PLmmcdPz|=Z8J:޽-laQ ; ~P9 /xۧos%\^xe3F{udIKA mz*̍VCxG.eJT>xӰB'=܇-޵6ceverx'`y/\j?/:V˪Ӌk/q)T3ĮGZUm;"2Q-vxz5bj 7\ؕ! 
V{HζTO'TzB% w-oK~!yd(1ވQ^ThtZd)F}Oz+1kI Ewx%wO81piE$L tYqgŸ{}X]e ELdUj1JcOj=1 \ʓ7mibr8v8qՉS+Fg8d'DϊoQZEiZ4 2߿ 6Nq­nq ns`B*:#YI,0 mR'p?d]-ZwUC[]nwjBvM9".`T4R dS$'9ι+gZb50s95B(ݴD3C[kDKX_W~=MaUjxW.xrb"R<8>^yuW4\'`y]d^~TQ +3- {* +V'QXGNyD!wڻ"_\}M\L$Ր>ă@˝wԟ߀z9^#ÈW|6aEoA&k[ KI4ga %CnTr҉O DΈ t:=ݨh;/)wQv#)1-hh>N/#uoc/[:NH)Fאu֠At ź~'jE-RA{n&S ׵pj=̽E{-%1M_Tcj<ݜZn3vN[9iYd-lIQR1ٜrtAh:YO=oB1^]#9mL8|%NAk?g:c\r:jxڕ!_{ j(OO9]9V4&`qp$R4L_۟7U뾼nmWӿ.^O\o~}(7,-A Y!e rJx4eIゃNFIhf8+qNYzQD vnŵ.Evq-.uA>1.%;v (q\jqP, ^|tV4ޛomm޾۷re E n|JS]$f,-IOҢTĥ*5Ǒ+Q}oP\]7@l >Зdf1&C/JSw-F̂ɹ*ϿhP˓+kVy+hd@,11=4pWE@P1Z &̸@34@*7N)TwjJTrڨʧ6\ZldbړA [хh(r 7""7Ě) )r_O&g4ʍKO!of=G$x@iuwSiQ1䨁 ]0FC0ifЇA= `ۈb}Zd9֩V6zpoÎFױi1G?望DfQ-5)ޗPD y8,jf1!UJtbjQ'}_{5-6Gx+̽7 FPA74]Wj_Lx2~EZ.}:G*y5@ҟhFpASfD"!9@*- -yN3FBSL:%{QG5/A ݅Z5/|5 peGb ,y!j{%1*/H}^z)NkIPEZ ׼{]d%`,mv9٥'ynLqHH%2a)1ˠȕN2 KJIJ@wN. @Hk d|H"P==m5Z]nO2" EG_DKnd"h:'VҲSEM4dauzgzקA VzTSl"!|1BNIeeS4j:Qe3-R T ;k iuQ|XIȵt%4x?牮vfG/bfA]RJc5!+QlqeC=5-0¾Ջl^E6Ihke!C, /V? 'eR>rkəO#nZ6f IʂS(Hk%Q)$%jǻ6C]nQ>r:BŒ7υA_LiGWN/晹cxxz+>-]nv.( Rl[M<'uN0Ih)Ht)4 q4 !Z |{{(#}Bu>cN" ~{H0NL`}Қ򴟰ߓ|zO9Ϲ4F"ao1`#@`7"KMDW犼%/hžgm~|lɵrҀSB)͑`J^fD4A[W(}yO*ގ˅ڭF7Đ-T/!X^uz9hG_'*1=n$X;ݧ^ LRR1@LI*sT."ɱBWmCW p MV aع‰ɑ@՘IpKm$W IwB/S~$?^z=W>4czo&{B+Oj]NDlT L9s5B %$XdVl$|\u$hdz?~<}`](i4 @% 5)zۍgJq>TB8lhjxP8'|`:DxvЉB*xP8@l<EZ.K"@.{L7Auc&qJ6%L|J|Uz=F/F;WVyE89yE 4Ggq!阠EZ0{ MLL#::ҧ’);iA<楚ЧpbLFX]tz= V 蘨`}14ZBHcFa8t_v(mu`IXk] vp]~svM=dW?Orav^?<.6+@z~؅':RH MÜz n ;Xޯy)V<.+=*vD_kmWE.D177i{۳-}b[IE~ߡ$ۊc'rd'nk,6ՃG 9ߌƢH=r#).>a )vP>_U\T{ ad5;,+D\N0эeT=kG7VD!K3~\5^2̆3`LSӿGXbJtB+8 i0$MD%}rv%gW;]itc"lgs{ 5)c+{>׊Q­=m콄"5ʵ/i~ȗP_eS>^쨛:XǗɰ*$bAjc)qd?+O^GdcBt_c>;u_+=Xh n.?ЯG&(o?W Q~JɞT~:?veL8_Urjai! x%1ڛ̯qLدo!%"ĻBg/.#~yUROr>JM!Eq 9 G0;8ϥ i (Ų|wꃓDC$l7o F Ngo4Xfl` Y<(o֟'vGO>Ne>6K2va.Oc‘l6ΆZi:= aA~×`dl?7 d.,=Iq8. 
< Ly;=0Arg~ IwYiwmi'ļcKƭAx> FW/- D ,9}sǯܤpGl8F2zl<2i;?>3g`hv߾|sҷ"BRr8?]^ǻYWg=Ȭ?Uxaݫv>$K{ˣ\\We;ݱ;Λ֓ښNtr4܁AA}oqv}&d-rm" ډX;Z9R36pv"{ m^ߍ3ch;4vcƛte\yu2s[ASNRVm`%G}Y̥{LyOf ᭻hMX ?*=O\=CglwZF=}=|vz);~-] p~mB{[oؿuf23CwWN_Y  d<`X fFaΜ~`pdXH,!ZRB0qD?=g];;|-n@)%L9 't,#_p{%!/Q rqػ%R?et w[sVYм  }d+p,#p!< /OC&\iM$2Vr%I]$,DH}ÈR̈́~7:\ք ̤xLxpx$b$qep{n;ACߛ*x^y8cdĤp$o<)ieidh5D-0"_*x+Jd.Qv0A.w {:r`:N'G -LV+Q[ %[ILw D[Uߙ|< 0=s/{ydUY!_\;yZ)?НmݦAq q @qH5UDXP-ehIQOV޹7b$fB5KP5gu" *~}7@$qgo#PXnx? ;xkYq:K&qO?aOa L;?ž׉?miN,IV~pn7 \`BuA _#b-̡΋?(.(n4A8z$`^pp>C5ʛ|Q I\k^_:z_h}bSf4))L%)*eԤIrӘ^ Y72ISf|# C#X,ScVMEyRD9U 8K*-u?uŷî~]@vp8'`m"8fqcŧ*>>Wrr]:qk b~Zm>'&9_Fm&Cv("E&i>$i5vH4KI'D׃vnS" ë~`%(T?;Y>g*x:r. JoTjxݲs S4 b9Y "002gu`wLGu``[gBZ˭5&LG 'Y'qR%cP BJ,F /K&<"ХH2M$7EPzwn` lاy6=3i ty*Ԫ&`:΁҇ Y . 6y*CJOF*9|@TzT ĉ LzVƈX #!F)aIsEB;?"ވ( $͗UYJ16w5?TitPŮnSŋ몠L_> 31)~[wno.|.ȣ /} Ϧ>s$z=- rޖ3//<18&L֟:)n$j5hO$Iy3a LN8K"MTRGEs.q$dl f&11%f+!?OseF9lӎ<()S[x?7 &t6&`g=3*YTO.2;~@T9{+>\j?B`p3ݝ_Z~,_^/m<(Kn|Z#[m.|[} UZ(|_ KEFL\~o%pMO׹Gs4`/(C^'!cy{T ׼!׹{ R;uej,>*X7֚A9Bw׊5 O~szt6_j6P^M'nTtvLRS%r;%p;{? W\GAq/G͔$)iYxfK5z=SS4Q+06LLRZIkupitʖ5ʱ_5Z~pZGШm,Ǎr 5?g0IHQv@6 c.1 HjXrc(U錖JdY#KlB#Q8e'L~θiGrM;J$w,`*:*&yNxCr - Z{ŋk>8!8!\v j #;}67cU ٍ7RQUriҎ_&I;եI>[JxL1R%ű4LjGH8 c]l#dIH >iG|~Η#i RHTkHt/?K(_{mӫgGz7>h=b9uGgzᰲ%̂a81XiFbTlP쐂=8E1z rA$ڞE. 
GJq*2cyH&4(1C+x/tN`t9K4"%T8NbNCI,Bl(leauԨDE:  08h*%E3x< PjQ"?b)gq588+k^$Xc-Nl45 K[*ܬ")IH }ERXH]H+ ̻bD;+Hb V*Q\"a2+ #\plCbkZFƯ,ֽ^4]xyYD݀ϑHM.Zf+?vy&ܢW]׹,nQP4(*cbCh€(Zs `>ab @`ֱN LI0bf0a)m\Gl>}8 kȜ L7FZcDcCqh/w" ė%JQ\O(&b"2s>U,38L8)`~Qw.V+nk¼rSˡO->VSukm,Z"v E/"f%QU7G#w @U-) fqOO7D5^3RA$BtLl'I2˵d5őJL#^baʇ b NmֱF3SM,{5QޫiŒ}DG\O, Vz+U+㖚v-GQHUg]%L`4Lm) 6d_*==Zv6*U}%[~zE=|;{xR 8QEygg#1o c!Anv[e~O[sp|p0aӮx̿aQ|f"#\|:WH\V5YpEwFeq\y?$e+˯p,?PtR{M[nb\1CB}8Tr7#+$ì!D u} ~Z%J㜻T/rw۳V!Q\^]]%Xsdx(~ ʥ2햇˴*9x6uw6,'V?~8{8gr&.mx3JR^mܩm\̴qgygau۰jFVѲFWS펫ݡqb>"#\Id=@s!ktBdnIw&vh` enQr`{]9=Do]O|;r[/r ؇2MNylQ4ٙ+5nfoo<ܵ@7')ɉ3ڊ vN &X|6&<A((|HTk=wʑVWz&?z|^:I9ԡWJ]2tQ%@1|*$-4zH~OI2{GrOKX.-+H>cկvę 8rFNZ"h~0GJprI@pj ΁NQ =)v(] t6,@p$Ym*kЭ.ǚU^fM3fʧƺ7ɸᨘA0t#+[¡ 8~F4NY_X°S$_HϨo-PzRMxd>cI?8ܵ_1BM6nS -5 ?)f>^)ˇ7~AYQ/-nR\tS>kWh !vj'9;a@s S&_cON;N!C ΐzA\H:ٲV= HR|.qB~O=K?7\-W7o}tiM:^9{y_1(e^ؚqY30d`&&ُ3HyAwEv :HbNgq;t`U'b6P:+ `Đ P b?FBZ_4[ꔀ\oLhe-5Ԓl81m#`]uk_]&7j95 2qfqu#lzd-'hVIRnkϸL/MMH]u$gm蘛XHyNq񌇠eA ȉ$4aIx-_N յD UC`kMд)nl4dya|9$h ݭgsga?J 1`iU܄6ϙLjɅU R4>0Ƴ7 2 %AY%Ό:I=g {n0p<75З >q;:i2**9?2 "mOmXaSr6I fߩGzg$N='c2L{s@stnVv[6pGu}^Μ n%~1.\XnN_?Im/j&bMbx}:WQGMYbgz-&).CBj| &J/}̱%<:ꧥ :{son<|{w]F?ݭ?m^zol!.臏zFH_Z#}\#,Q+WC'2KJZޚ "MjgL#\ ljPKMfQ1;R_|,_kE| ~Y>jI.ٜ L6ˬ ;2%(, (,c4J>>|E>\t3lrv:zte혋*Ҕ)C D6,`rO>9^OuAE^G$A:1stNQ7:G#eF[vĔ ?1)6rVQC+_IuVv&mBM +cQ y/Y sU@,mY*, SI6EY!U28Ԥ/R PcE> ~ *)>#G]f.Ns6at! t2K-R6@Z&p6d"d_ƃ!AhBQ+(+kh*b8VM*j(\ VvB,8u w~ϩ0YdZdP#6(YɊT`8^$k,}~牞[>%0q0m#B LM&:U!"YM#Sd:oHUd =KC![$bbXr _Ze\Y6[eX! NR (yԖɛ}!ư0ZKmoՂX?sz NZ^h{-荍F'U$ ?۪_(]Ȗ޵%%*;6tU50a6b"u&QF!Rwcb~Nv񢎆fHQugуax=kI3vOV~Εfltis)>& u㟾>8v޽8gPX%+ ߃p83 HXw'ֳ3vpe=rO!v#awWWV}*Heum=U$Hu~}aX,w9F}Y[&صlֱ1UN[}Vdņ߄$mQ1yhaKd4ɥHc3KM'Ps\> 8~%GRnnzN'}h7vVb1nB+dYbr]ouP+CsՀ=d/'pr{@gQbr3KO&\I-3bH[h{%[v6~ ^i )R4Ր:e{l)qu:A'n( ( ¨:Vz>a=kq$lvNr` 5`hNf~ʚ63C4h5fc9]&l5Lvl{kp0P[s~w5qbR3'ɰ6'H*q]τx53Rg%GIzM,8p["26D%\K$ཱི!bPTgdTy<$L D7ߙÝy\;ĿN!^)WXzFe#= =JZ`?خ}X1r=U*A';:Ot>I{XR&9Ydؿk5G"ȮFSF@=HjX?) 
+b"u*qBc2*%Uj>WMzBE :h I6*ԤLsMɹ; C#ӽZJWe% P 1kIE7t}%1 n?ZIJ>ßǚj4#3-d!J](]鼻z۷݁TEƢӜhUXO|&g @@ocPS[gkfr=u*0sѢnSC|c<[;ʉaYH[[_C%aJ{ZS3{5/u3feVP.vF;q3VB:Yjm Iu߁V('HVY ! E0BK'v9Pn;&\;b'j/p+dءmKheegUqq"vV`%'!`p8X[TV򨱩#Q=ɃD  D`-%b0Y襋Jx *, ddzvczUHd+k1j<)z*GGgctQ9ČП[>5gf`)۠ne\!YKi9E]R:Y\*;.sZ8c-!O rޞE 4mt#uZM\s7T9UmoTZ y9c UۚpoYt `-y5KBi>`z=ۻף>t!5 HZ݉g(Rm{!w zBb(n#wX-,]HV24)C?jkݭU:epvVXNnQ֓to򵣊D~_>+NV dqy-eO!ܽ}}^Iln?ݾv~vs!7N>?~|Y,%_ߜ*rۙ+9 >9SJH#fVi_]ݲvI{Vu9w5WUgGa`nXZz3^_v^}Cw~jA4p0cX1=jC|Ԧ߿Շ۷Fd;\O3:+Qw9. (;ZpvxW.}=,G^~xΙ #Ȕ.d:߶oCcso(weIzY> ;X6| [6EiDJv{}#yHţ,fU-؂X"22"2ӴX;짜2zDd4g-lx0EJ3/ ˏ_^\t^}WˡʴIao_ɌnfNP cDXr߿DTq(ͳCH2b?4?z/ӅZ^~}Ƀ%퓯m g ()Y'(WLD/IT !{>dwOӏ<~._|c|y#1S8Qmj8mJhD6S[mA?!UpugDdc/JwE#ف:~Q?f}0]cu઒WW̮+_??no+uLUb13Wߞ!pՂp j Q 7^*mhod"pkñ&kk9TY,ւU.?i{mJMn> }Xצ>Ky4h0ʣaUU۶* |C7Lr'P)1T S2,8XMĆ ,k- 䶭em|?p"uNhemO~|XmlrVSm|tM.×`{~QѤcAR0ж'?R7E? XvE;HWΌS8`y1&8tAKL<5P5[A{gOtCZWq--\̶ EN MdǖFͬVL:#Z9E#6y>??~5k)]F*֝ɓ1G"sndls#2I oAFQa VkgD5c,}BA;2VPۂ0G!!2xW} CY`eL OIQ3qq|Z)VKINpQsWm~1i[m,wHGRiDyWA$ B!,eO1a7%F=Q7(RK则yбP IQa {7bWn)p9'95扫ڎ|=ᅤ),{{hÝ`~TvrvbN &>k'FjG-%0Ja(QUZ2"zp}㏵+iCdZHnGYJE)jDiM2(+(w9!=w_=5V(^0)}1R5c,:P9Sz/@,NMrfdX){x@ 2j[I隓7YXkٻup: r t1P Kwa4~VߕcJc*U$Af Km)q{g. 
%iJ>(/8sùX ƻP; JMOΕY@드D` c*.Ma0gsKr5clhkW(2nuwEr~gv֧C8[ќᶏU c_}^s˝VRo2u,&Ims=tKRwLZ3 EH_U.g#BJwQ3שA uZa<ȸ\+2Yk'^Ac?o7U\ tT\vǕsАXk}WJIW\BJbf~e7u7t b?HJN}v#S.#9^Rdн{WFvO_ɮO3/ .]Pg񎶗_%A Rb #+.e(K)J/8l3z1Ʀ݇kha{Djw]> t_0sP ZFt;sdS:\^:h'@؂hƙ89]rnIb.Og>U$u/D"5h"B2}!cEbHʕDWx8Wx8^=VDwXO4Y<e鉔xBrהj[:$e+"y"e~0O3"ǸmMaF>>/a4ѸFjX5<̹&KF0aZHIIGBv <;P~0s%fEjk FkFŀZ| 8B,ӆ8J@NULi!#yñ3CNKr{U(SBy<$pL?8T 060h 6@PK}am[ؑ3倁(l3UJd X8&JeJLdjyuNϼCi/S5J S laKuWqJ 8 /DQ)\Q͉B}BXreRPN/S|J{H S"ZЅhК HKCÎ^| `ҳڇ/Α]9IJ*xZ* ܀G[+qIio"KOTIg>L\y*&ep_~ kQM^\达taj^A`~}= |< O| nNr~xb9iq=x}uӸ|?z~7~{f *W_oŌ jt3\{wx$O%O]M4>1-X-ՖNǠNӢZ2->Jq@$TD-̅f<숇` uԭ_;7Ècw SptEw1 UdS@w›@˶p<nfNwX{rFn *`6hL'=?yz{0 >Lnn0C0\;3L`cq 8bI0L,x/F1Y|ZPKlify4рCA%L 9QĞ47/&C< \A,Z^WȊu/RcW*>B%iz}Ve O3Hd?;1OHUmpG$`I8kTI@RJ`-:T &JЖ^iGa~*a+Y-ҵ2yi-'ẃrRJC)+)3_84 K@fStx; p"M+0b>a*%3Hޘi%3^>ejF3=3#٩0wbnFN9V?&Sttv4(SJz*Ifo6ͻ-%>A3;ui'hkЄ!״ZG}D}r_v|xqSHZ:WSp_t 3tn~46Ӫ|-O2c94$L_ açLBۇkḯ(zLӽo蟢\#Jl)LyH|DHit ߽zzt Zi6ŵ`IT^'zc- #DP=Br8Ҙ~^ZFm7BQUbkGDZDLyZ?L 1/-Eh.GYV:+T<a &p/ƆKC+C ra*#ë ^]g#k#^(t[7zn^]4'^ ͪį{͚on%vVϯ$BK9 >D70:T8e(K%R;EBBDb`w1(dZi(]cV*gXiQiyL{1b;cY Z&E5ʐ/o zOfU2`8sW8%:$b˵Oq]뭊(zW]mo+BL߃/pSHОOwM-촽CI)i%qݕp^-3LN 1ʥ/9WbeΌC{wDŽZ;x%)%kҖнGAMi5!Кw׏$7qĐKRAiIqR(@mo3̄)0!},FG쩌R2&F+);?"t G)LGr4!˲J\ZP)u 턨(׵ZGki)@]N瀐)M3E`SmXJwN7 ^ZطMU9)ZCPSu3^?hO?kY)ue}E/s4Z_?xAyRB fs5Et]Y.;CqR1cLƞ Ap$$*NFٽ:2WN2ޥ]Qavߎf$WG,Iӡ <.MEF曊l'ܝ_1+WZl2w6"LPPg~t Q}{=-@,Oz'OfȬܴJ>o|ӻwTrsET%FrxW6'J^TRt/*MQMx+6t¿ t?p!$lC$6ǾlyrMT7CHwN489lv5]!rV,bw /8ꘋdEJ:2OŇq*5{;E]IѽMb*VYV*!JUԇpЂPe%'y)v*fKZSXiNŁNu]*lWPTp[R{"Q"B , WpmI6reŔLDȂs ˝*8c<ftT8Y+E *E,%P,QNWBRj˰"7#aʨTWWt$].~xq&RtuQڱ un6#됎sWߨ2| ~v>2ėr)Jl&3)M$mmVD /!ArT:ADȏƘ.Pj`3 #ɔ^f i&w)߾s$6mfo2QDO#5; %G=R$W@z<ډ 5#Ol@2& n-6|)T ~<\8}˫ ye͙TU2qu=}\_@TɰϩXUvnFM$}zoFQ/vu_7Oijς))q'[EDJ DJħ0o'|ǪD҆6%ʌ|$\h7s%6%v,]~i,}}C!dF4O[*O^^hw\d.닋Ɍ=Ն IIO\tBZ T|CfU2 P/21g2p CB'[)缵WZn8rt4zU$kIxga@M9;pn`I9alkͮ s`s"`0>@3ndA\L6s@]38MeWx;Jx{,7P1WIkw?FXkN=Fߎn?:hq۪}g-99[筜==sΞ?gRqrqubÕseGgA~"R(wDk]5qAbCmmGc,b`,tlP7ս`rtjZ~JIzCG Ӻ?;;ib5ԝֹcT27~#MNeۈ\$EmFS"9ACJÄsìzEe$M>yo` 
JWeDUkT4+Xݒu*QVڃu\8]V,?K,xGeN>*yq+)l|UFW.腡0W)ڔ -sq#Sfd"k=6JP91\~Fp$j+r*:aȼw$h?)!-!λ8vD%)ُCJSr &xT{$av챎dK)&z.gu_truW+dELK}χn<5QEeLaSs\P*˧5)6Ӌw7GcjAc1_$lj+|GYug[\CxD LўlY8Wp2\;y0'NVjVJd+;}q?-;"1=juOg+lin݈ {dϕOf*(Et{/L)W,:Ab ^ x4"f#izeR9# 1F1KϨe 6tnCYs[ g:Js4Ŷ\9uͨtsSz_z鈐uUGv%Ԝ 6HkNY-N[A`@h#y9*/ ]Qy ʼ$厼5s7036E=AM@W Zr+|`L/8䡧Io8 -dւ^ nׁqA*I7TH/iP4K׈׳3v|&ی"W >&Tkxo v86h 6$)O$I$kM5YqCFM&lL v\)(%+B:T9egA Iݣ2|psn++uUԸK k) WԖ*\ŷ߸7йh.ϟnq( m@rm nrѱh?{W۶}o Ӟ}1IZM I[1$ZYrDmgH-,ZCqsΜmv"LTqHsT}<n# 7mv6( o;r ?>+4bXE٠]A==#=Y"D, BC Ć%*S<(cIϬx|z}L8)વ;3큿b<'%Oeu5fiupd5ᠶΐ*f]Y>HcP s`%"V7a^iV8Km;rEshh(UQ؋̠2׈?q˹R7&#jVj*c2#Ug8HqP,dYCqҖͪ9r-(GN()\RmpD 攩} "ijx`ƣaQaZ dh9ՌppՄj90U˜LJ"Gk#t_ ǧ˕_DŽQw :T],#"%(@.|ycy afycf [L}獗+㩪>.((y궏P]h4(80k2@q,#0PseM D'  hW3Ea0:XxV@;[ʙ'!7p%Np-:!ϰ)-XkԤB%h\ xx$-eU '\΢a:,ƫ9J1ETOSPz]#Cm⇺I卋D=? 5dG@6'Zί']C))c\ U)Q(ºLpf PdH g×9IE(Tzg5,$  S Ҵ$ U,q,G3b2ƪFKd~]v 9]&HOF(B D*ST$q\R]u7$᫗`8# >hگ/_1u)/6>=xޘcC-C O1$GJ֎3_|r~" Li̓?{Z5`&f#Z?2\tda6#覷otgBC0' 2@q" I$ZQ+$Ѽgꥈ<D:X#Jdg#G+ags%1F2]+r RE^gCѹDZt1#X'A,Q;)OPTp/bFh6Zqw'SqLuZߵ2N?nޚǏBU8w JzyGWjA^A45B͈Z{Bp=}g곜\ލ(r6=gEe_}?,ӣ 㓣E_LZa!Vm6pJu\Mc(G9U.meƢ9a3"lo<}y>:׼Nȩ;h<.7hLm~%9"(N_D"8>8[pڄD֝[ego>{5agk@y;[yΙ--ꩍbkzJh(iܮuUGyܾJ.>nlkXT(kU*NpF>pp4GrHhRUH璢'ۮ#VPTK[Uڣ38Mw:Qa, EY~RtX7xuхiٺi^ͧ9Zw͟xZJRfoիZ`s E#wreXl弊I(ոJ2Pk&#-lB5x֝j2J5scj<ҩ hM5A=9Yvv%lQ8~3"-{1olz_; F Ʒw 8=)rɐsN90ef7HErRb^Za bK G~swBI ̩XKL{4jA[=4>_5a oQpj_ ؈U{O1g`%CKߺE}=SXryX *iB;%vB ڭVSn>VK4AY{,ii O3>?.dJ?ȔVvrz4!Lxa].C쑅k^h W\Êc]^vqЧ#ˆ^ݫ* AoOtx%M}R!1#%}\Krx?^6]6 I,!Ђtxzc# W}WF| Ӹ]UXQ^,ŮB^wwEllwE;hhWD%JXw<Ѽ~ &jќ-pT3:<$Ѽu w5s;;ùAYn6u>+d:1c˛2mu̚ &)NY]mPEn(V\1}}0YnhIT-M\/M;ނb a"AݹIJX)52K )c nY6(!0 \ISGtb.i(䕦ݎ7fxogT7}X=A=g5sV-;VU˽tjm=GWgp1G@r0sGsVgYq@xy9s-.cw!x;g6ݙW[%84b}kxwnmBQ}yiy_Z'[=V!i޵z G+Nwe htUp݀N;Z@=?B8Zu(6m$x@9Tu%ɈiCKIR8 9IbL9FҢ:pv6=GWcp.Iݾͣr}#ygj2oD1)f+ F*Xb#& *+6" 1jBͩLlk͡Hm48Ыx;e)dy/"S2!2rJ3!=[3=n?d4<{<݂,\ݛYM>s#&}/#`,^ 0?pEB])o5xQ/"qXv48Fp8zn %iF7Or*cd_398{ 8A} j곙<OBn4oj!ZY 0"?@b7ʗn07q|543E3׭ ' 9E7w5:==O{qW'G_^F'14?==<Ō>OWϮ{y7^d|~ @~~ջ߽ꥎ|Ħ?ލy'7)?:5(9M.] 
w919m~fٔ'o¬᭹< IӗXwog(mNͧӨ@F)L޹8AΠ)|@[Lfoٸr Ofd?,zχq|N8K-l͘I,9@ z=y݄uGz>Ewvtd=@=l<.Ӑ+,g^OC զ Qz|~nnM=_,{Q̵s ^ uҟoa~ o2LArץ0lTÿfL#ӟCvr0ʸ}koJAo]қf8*>r~jϗ`'{hLe}z)*Q+s p"9W`ߟ]Yo(LfIl l] Lbe;Κ\ͬ hEɍIFz}t<1cdء (6ll]7dY(%+$]<2ψcԢ嶰 rvR(xrxG>ÉG3N q_;?{0o2`ʜ%j X(k]W1-*ɺ󫚡n ? k1fϫPka'h:p:ru.Q\ s4`Sa͆EѹÜ3~KN-* 4&+^?I#y |<}Y%s)5ɬJZĻ}A-+Eeaz?&td.YZ{7MjAJ@;(9^~ocA0%NKwĜ54^{;3+nk:Uy?*,*)J6[ޫ6JGcY_]:s;Ԇ8FU*XTM6t]F6KRv ȯ`!)Yq`w'&sB"YgguwżJ>^b8Z嬂KL* WۅI@yw0KCJp]P 9v1,鰵 CevL,Vv!>/JϢy" 5f^?p2r1!)[&sO?ʋKMrzasOWYnhRhuDOG>RcU_*;r7:D=U 㕩Gbz<+?Lfa~M@ rm&ɕ6ST:P47~9,L7ͫ=1/JG񈅜}b4Z _﫣[$ )]QBRVI0Yn&+$e)1%X)s<Z1O)$/rPl՗JYG FHkB bKu3gHeu Z8rϵ3 sv 2 "` <*-12lKXQ91EA.T:[D R89yaCoú9sE9yX&j/b@M[ xJ1eCb C ES1[0XG]F`8 }}<%D Gю;NTTi| G|ALToZ?[oTw%Tõ!j9n D^Zs,$S,!89fG6`Pn 44rG mTdJjx W>OEAV;l`4 TIR&g͒迏YY G%G^Ðt5 m(qѐ&FU]bph~:' 4ډ7{zċS;Ncx329( Y~4 ,d5Y?Ri8{ L{Z)k~ғ0ahAgt(pӫa"iX{|/\j#<6\BqsEVrRhCnև&h'߮ItA7$tQpNc6 UytFyvQzBLE*1ry1UfjToP 1RgkN?;0?^3j6H7]OO"z p8O`l!b>(jحt [sD&LJC>S֪ H֢2$m>1uhdQGٺLc(e'F :9%*IOBmtH@n#2>]뱍Nvغ"ϰ.cLek'g$>b;s˷35v[8SaˠI -eo12u3i_@J&U:Bh$`x`K !bI-ܬn;ʼې 8c.l4}>`}꽐xl|"I2qsqq-[2NyfI6\sc}U\:ۃ/;"| { Ir:N

.hN^x2T#˄h(k8?o֣I^b'K*ї$ (Z4Li,8&:]TIWKL%0 2 -Zą ojY"MxqAlԡp 2S .H0ܻJJ&:&PRmIgS%3"wLW9PVe4&cBe!yE/ %|.Ø+4L`WEł]__l\>NjE&>kz8K^> _hɉ ]dά.#r;EK6RbDH~iɞqa WS# JN<~fk殟'|=}.(ٳe+ED0o?|߈TZYo?{0%o`%5H֜4ӹ w"X* ]L JX`k VRܞ6܇RK>*X4. 4ӷ}4 )Hj j09Gk_DRN~ XJq3P@g Àց ٖRZ"ƭZoPlg&t1l|b~g =wA>:3JlUapƽٖ9B)q&fTQI3 0.SsCxiS>u57EW"D}W +S߉&':wU:U1{a6n౛_qt(8轲5NY]O޽a8df!NZt!F/6/a~W8ӆPW't/6@گGO4;.A\r|npInJUSE}J9JR:hoh"+/>5RwG䬕ĕ%X:)TNK9}t+WpF}A"P 5Z@ 1ܺ}hKe@X3< oo|qHTz%D0t=-O4e 54I !(. sB?1M.eS.C5[&oInNՐ ȅ\<6!0^'} ,JA Z_\3~`~9uGj.ʻ/!-oû#YHSXSH8}M+̗KYjkuv]!ٯKń<+^i9ۥ0~4BT[Ȃ#,7i&mCcΛ@iV@R.Ex&P)r@-.8 dܲ2I3I.|M)$fJґ"$ ]A(%WXptO>ƟQ21t4˘Wz?}GkPK䱗 sS,CȭpYQBբQy&O:aLto a4RvkNSj~;;OMǏjuggra $"cҀLeW5F__%̱5z-7gl]š-s\?KԪf9)944)OA}eOG,yJ ,/6BIAs&:2*V(ڋBAA硲tpƽd?q3o<8Oo~0roJC~5w/n""C4+{i:h<3~ưBxp?x>XE[`MFI݋CdLtcŠRH 9޾W3mjyu4)g0+ _fFZP+QA4Ev\0=puV *@VN ,a@z˝׆m j2. J1΃|: ,Yi0% 13aCN]x&0޵6mc]K&niɥNhs+K(9M3/@RE@"y봓HH<88) }cb $*=jmf8d N(FL P2 O{T _.bNOrBhah"u,r4%J15س#̓}2R ](IQV944Yi٘UВ,۽bۢ@HU7Q $HmX%&TyibƇ H¨!Y\sI; "Rۆa܀Q>|'FןÇ]&.wlxnͿTKRCNn3W$Ȃ5rY'p;*79r4ra'kMw. ׷O {ɫѧfו:zfo:H9U4Ǝa:? E fw0q g1Ggo03-0bf^3`+7n*rQOy:ݦqx(!>=eBY[S^e`(C $[ٔ:IH"/i&tdѰ3d6y@x{sCE$ @[&TMuGQ80 Q7#mY-u.:E|)UdCpC 2@J8ֲ a%BB =u @7#zjrl{]eVݺ#PE*A7<s 3TRkw,KSw RJ}FzAu#մ %zfƷIՏ?HIKM*TʃPqfQQv b>7SBC VAmq;g/_{I<&Ϭ˳Jy%hF]>wM^8 A"f\ `pI!EF.As0n~@ac8h]ݦ8/ Yq3JF}aGEal˳Gor1VJ:g.w;;-yݼRK$Ta{R5\l8VؚoG=(Aܿ2U֊QBmFϢlOBSLNVl85Rv1ңqOb׹3˞D" Y͈2:<.QFC]f#&=gP(L`uG Kqϗ z*C!G=d-FP@xB!63MҵKܑZE" sx4451R)2Yn#o h0Ndzb0 G?$dry6^-w~pV9fkΜ[˜[˜[˜eYŧ2E#EĆb'-9<(awz;Nr<6۴VQxbj4y0f t:wti0]iن=#i֙Z쏙9$#{ =bql]f0B&y~g%Lx!SܸO=}?]2D,$Cl-yq4ۅ /|/xbrY<oh0Rw翌&.&dj ^jh&_~ݛ7O'3~ 0'(~__^z//~}otjOoh y=Doؒoxˁ=ί'-In~Nzs{x=|N4a2Kog~0g2{wLzpQ'q/KY#LE7WHJ eFXH*2l`'JFmh_}7 wcڨVڋgۻ45? 
#y'^t/3WYxHwHl /+c,G;-__ (`~{'0yӏ)J?^LutKž]coF(!idHq2-U4 kpKo FF}2+o?WMon&ѣwZ\?ǖ8l"~o "Pzh43a${;; >S-WqKȓG}?W_&K4TOq]3Ǧj$۩H&GFȾ{k,8ґ,\Sӈ Ka{ 4b)_Oȭ",Y#$UUڪ[.6I]nNF\?sȥ>6'}7ję6h<62+\dppgge5h3Nސ= YnҚi {yg- iqv`Z#nYfھLaIL 2&혆E#U#nh2O06=XUhq͚0K8 $ r*C<>(zkߔ-'hqbx,s F-ilxwXw;];$G-{6: 0Çmimm9B%`9kݓ-ʇ@bƹhT,yr%hgw(3M<1{odO`'=cgo,kփxxß NErZGYQ`AmimU S#1 q}n%K(p})` Q@ĖTh^@7$j>–:o0u]"M!T`yMKƭ $wtdcq,|yv8V2O^Iw?KqqX_i;}:\6M}r0 ? vA5ˡ8 6 swGdWu')l͙jh55`R!_[oQ!ul\i[8]bfǰn{hmP|ۇbw :9}ӭZR 3*EikwSU6j~p rZ` -8eȑln]>Ǡx dMi,/%sjY?%e8JdJleeY<,<381drlY{$A!<6ta#ϰQf 8wv}\:/O5hzfwRŕr)6'n:Zؽ6 34#l Iɕ8!:cቤ &zq@``1G"t檝'5瓓q/9-^,|*%:Xc$4;x8uh"Sٰs2R)G7ߴ]oSOGv̵k"cI+U@-AO:M >a!Z#%S"[sksC2֦%Jdh}{dV"U-L4А!j|.}Tk!Jz4TQ9m% *&υ3 vlʈޙ)2"IQ~/n1F['Oֵ>;re}IVd b#k7RPfLby9˹a96,ֆr'}8%8G<3Qj&B Ill3WB? b /? _閷aSEiCue6Cూ䝝ν4Op *-Uwh.(uSPp®/qF20 6uMϖݞ A@I "0EBI6r+̛1 qw OpJ'*r.l.I44H0 (pb O91A )fQ^9t:? ' q,>|ש]5=Zw,(i@K.\WgiˉN~{=Xea|yb] KDx&U).l4k7@??6f2̖fh[7Жsh ~:KWk΁Png&#9(;{HؒOjU0s߬hsI @#$U(`s |Mԁ 9F\!hFDx)ǰ(H 2J O0;rFf%F,oukfGq(y\=7c ^BAyk#=[)` k0XN}*$9le1TRx7Af xa5"&pX b-/oՅlQ[@.ۋ=\8G[gf2j-Q)Ϳym :/n}a()q,m,kcEc(b8õ;L7m>r/o[O[}5=h1#Q˅uCA!͞5{DjmĬQLH]AMm5CQhQ4j?yQc`*]1j9%uHZR$8TPa)VPQ*8 !a <H(]n J\x]msF+,~AUVMm]Z'_5XPLR~V 5 ^H\m@yzzzzz'H8mo%k t)XL0Ɖq~3e L.u2eX i;ulp5 eMtZ 8>P19HrS h!`R3 : D3Gp$LsU%a˙Q('$+LMuBωU߮Q":^BxdM-rHwb'0ڔyq9dB*twb9&8H(Gw9 m F?|7?Là=[Q }Q\hCs{? 
]u%>#Ty6(gl<|n'Y}/tJj}iΤWAnpJ) X6}}s1~W!U4DgQoVK퉦fZAZ^ĮMڍ*rR͚lF1݇>$39YT"^~J8^FJdl+$POZT glYcv?e?i7͜dTX0D51*UtZRcpMbTLZF^ Xe!{1+MVʧDrp-rq #Ew?cf110,~?=8c$mNG1EO!3 )_E9׳@ KU+ؒQ cU'Y~+C OlAdS|X (j~*\^ێ7<})GER_]MKA;?W[SLeM |\k_8-@vBQ^{},̓"JA8s3o'aWTa,)vIX\ &2o}O7 7 7 7+^74܆^Hn9 a, aC qmieRqiGD-Z:DxYr JA:R$/KD]yQQtʼnP֢8Ql^*N5{:wpӵ^M݅h䒄{<^t`φ/Ĥ.v*˴ʤv*%>oT8G@!ʌDTK/!䅒ȪĔFT~*/?(|VQ)s!=<,E: a1pERo=IOY=rYUf33ՑQdZuSԈRHE:=4?RҜO?jkƳ8\W2JHC%1GVt{pow/?\qkw>}+s/2=`Y!XǺ):uh?%%"uih>@c՜MNk )*Y_6jJWuKFQ厧ec&[W$2o+#3ϩ!B,L*|8[}@ꐄ%H:k^@I{9ƨjǙ_bẽ^vn]5Vr)TDZrw BN:\{f0g("mPJX_XMg9@)%$aAsڻi4'+Y;64@מr ;^'%b;/+nf3DpeN&Jr EqfUKDXpcu%0`"RhK&{2qML}mN(_,y!Z.D@(=8k(?&mYr?4Q!yܨ{ϕ 2,)wI;Ȃ6sj@>*BkLXB!@\q#R>|vJE3|XJTPh[+шW9M$%7$ע6-k 4D[d'jJ~hR~ck-.ptA b$(E" 3\$:MӄHUxΒ=؜ <樊y\!U^0nEgS'GK"yVy C'.{,$ޣ;<>(yaʚt&{Rt3ƙ)"RޢֹGO/f+bz̖-#/% {ᑎXQs[}%~ie<iӮ4E Nُ*k6A{WvA.:SurE =3ӔuiuQRwEť4jA0508KNFGSr%S'{QJv꽘xTwnJ]t~gȌ 0UIQRcUEeiyxѵR5ObB1JK+elD LL.)Eo-I9*řFH9_qݢٺ]rpy.o,D8eJe<aG?CsGV[,GF~7!Rpv(I m[?%W ;gȶo~LJْ8WZ?hM(Ej3+opMZ`U]/J9h;st5&HE~gO&kSW)CS]`R\٠'â"l.#Cp~RG&Lh%Ow:SXgR.=b+s[X(H0tWRW2~9>xAFPT,(W>$3ue.ÃD Wyͦ[8{po5gB\bMYuʃgJ;Uq BVP K>P}H h֤J`㯐=/4&Z=hr6=ˈ'nc7!HX6A傕 Gp>KtjH!TYي Ku|rHj'RQ|/9?"RgPٿ3}~ It£t DNn޽ sٹ#:kxKʽq* \]:$L[P=m%MҰ yy¶'/\Ha.eWr/wU!ݾyF@uk< NcH wRi~?~]'?]<̎֏'wڳ W{!e*U$ ,IP a 5냇Cʀљ_ېq=4!e:MȔ[a6 !^2+M$aђRA[ޡS1LXr8{VAHIEJ}`1.!?DՊ*ߙBka A]`/,0O_> by]iP(TjۣPm{a`QLq2-eo br,Xl gr,EGUҏ?ԁ̞^h+tz~JQHQ>#Nh`:%V#Ϭ 3A5&B{]/,Z>VW}Z+:Qj)ͭ,NOG4G(#9e 2Yq-PjD>Ck$h p{ ?ǧ4_5~qH\#g׃Ǡ}A%E"{>q],߼D^!_+?on,n5 a cr9VjMu?3˖#q4 !Qǯ{{3RXO`&0/" Iv\ fXx絰 0ff5Gs!`ȥyEpB$O]FH!rOl ,siKhߔltvU ]wB&+GU0/W1ӘkBR92~lR,}2})1!X3Pq`waN|g I ApEc &i2JS"92:0( SCL:X!x>LySlE5"r·d54qF^T[O7Ù7.݀ 4 /#4FJT[1.o1Z2I*qf05aVr2yf.T!G#"^r`ŔX;UFN׎si2/&b:%,MU2;yRl=TRsu4KdMAK#X&:2Wj;>lh~|E~-fS?b|}=kzp?t w?ds;eu`qG<<AD$&?{~ 5!??N&釛!|B_L|t<?Np}HƊjEN͟1!X9O7TxS%С J`*(2jl?lsT8[ød/bBv+ :!XŤb0yp%%:2jבa,#%`P/wUr\9I EAkގ&2TNsnǀWݛ&jqS0ڌB\m˘0~]9rWpAZH $mK(ͣGZ_ɪb=E^ګ<2^U5.۷oX4jDXHŠ"RDC(0e* ~^GNlAUkb4 jЙ^ BD{dߪ|ZO  ˏ|OPpwvLXꉣ4$/8cI qEUSon/nY[yHKEԀJ|GI6!q<pBuQrAj8ƴv$k 
#9kHqAbd(#\wQlz'{3%wsmղWҾ u^Te"jSµ+$KI朔 7Y4z@9)gv%Q8TLwR0N]Y]_6OJFնp0*u 4384ë́ҥvhR§1V3uEUTt&l*gv[!W%#;430T=z8Ya[WfvFYefQU sJt mj>guh7ZЍ~VZg6(RZgH3tA̳QЍq10B&(瓃/H,(CȘQ9[O[q8bhSzE'!Cq[l/![v#ZtcԂi Tk%'<0b8nSĂr Di;Er(DHA:55~:paǽc<$˥ J"7͚^&1`EP*@9E=}$eF0ǚV^"Xu+ / ( fm .KTƈ?\)rNx.u(92{%߫yE/hڇk**·sYp޼EvB,*eYC|4ȯn?k;gh徠>\xPWG˳ ldkÿ_\?_}xy}ˌ2^}'ps.qix:? ve]z g&OZ@>?1>} Ռ rɹL.>2,G3Gb+?ebzrHjV4y\"abQ֚}OGD 9eTp $QA%d\WxWJ`0n N70!Sec9Ӟ[9nmʘ)q@N<#%. K 8;gH-1.8S )8#8v_lEwtq>Q=H<7:LLR9cF ,pqFzDJj-3"ex47 LS&-Ia>( Lܜ|E∲8"Z ' -F@Dk3B+ BDgCPoUA3 U <̬x'.¹s_,".xQ2m%攂4 'bR9oD0]׺INfHb%)\dijI/ OY.iN7c:pqP2]-U)T.VҌ*ڒ@ R= ,?SvZh?Q[)܍3CU a2; ` IddkI9i,: 8v)aڐܚIӐ0m@ ؂-P~)(mDC8s.r4e9j5I~[C.'x^ }b P 2\zN5:r)KWϵ$hUVnW^" {#QVa$g7.Q ݦ3\>"ի󤆾{W.xz8Yْsi??y2 TwA5J^u^/ލ-_>\q Jč*uߢymq}N:^7,2ظ_~hW.oW&&bAk{^<훐'eI8&5)Eͩ^j`W6XwŜe: D&HV$:7m33VѨynԊC?riXBp&-Y\er; XM-!]EAzS]XdzO_UPEFK/V慀XPʤHéK5\EfC9YTS&- DJX* U HRAb|/׾r`T[@+^F(Ih 02t׳%\K#XT2#<fD9%4![v<^" 't-Jŗw?ix"7_ݛKa7›Pfpl%hKU ̯jTZ.*֑y>sߒ\A],.}|rqy}'>!/y9}/7t[ 'M$<-:>c{x\6;I[IWHQ.I*mЪ`V[z+ ׉n߾y#5@LK]tRS6"VCБ< \~`qQsB>!}m!IReơ~d)K&%)!db&QIK#K 9EqZFb^ xR!cVіmFܱ# BG{.N' *lFMQЩG!tR^u4T-﷔UX'Ө9N۵2qsd:8G.Ir!\|P&Z`ӥ~]uA̻jk+~~ AX]%T PQ1DŽǭx_m} ۼd $)3˯~`>0:t"Yꂿ="g߃?(jFɳ? xKy8g}@r2$;.խ:_>3&}w=n^$ UY>tj*rqwWe?2_@r` 10iIa '1 5)`5!}xjT|f3=dа wv@倞%L Vo-cr:b.̷ NS;=#@wr5 d|e U!2uH^nAܢP-ym7lp)0eZղZrH6lDQ;:?x#J/Sx[VSĕ1N=h"VxW2:)PLWp]>ZeׇZEևY1T}Ι xlrnyO ʼ\q֨zh["] JU c Fꑍ. 
pmlU0ClQr4l㽯Ǔ1trm27_>g1+X(t<oF2_?gAc,uuVx3\ÏBZt1b'|;{.2Q۶26ʵfnx(9yDzaL3ZǑv VcNfNۜvr=0kkeqA*:wm'VoVӳsHU*lKXæ PV1Șl IS+@ۍg:A&kLanN}PAJmSCۅRFH%Iʒ E\NإGV1PT=˰=dP-kR %t50H;ܚcS-g!Z 13SG݂ع D DNMãc~=bDP6uP2WGXZM'$gLHx:!3?{Wq / ANb42kC^X9P=xE6xT3uMYPŪ/"##2.B1R(i^1"D"7Zq>XIcEKw3ehZFik0aPzKΧ6eQ6{t֐$Mt nq_Fr:YUy,y$,JGKT׳n":djhŤ"&p229ޘES:[>haXHo`㱎"xK-hؕ2rn7#Mc$NׅgP+*q^L$KbV y}2yQCu= =6#U,΂ DP/( 68g:Ȗ|s&wĖ,u7qdRgڏ&]`TWc/ m1?ؘYۍZ S Q_xE8D]0W튯 %mc} jMP^+I*8gw.L$IDSyÓ/> n,*U తxR\tڐk%Ab9$:=fTN7VcƄ@t N>,-{O1*PXBXR5|Y#iX35ֻL9߂P3ΟQ}֪ǝŚACbfF*A“1" 1 f*:$xsPt%~9C{X ȿm_~ZY\xQuN1ps7~>xR~z'[^%lZTUgDΦurWX|x~Y=>H|b4WJ9Z.W.~~:: 2kxh&H,NǶ ɻXts`A{ØGj q9619EXqfYrmX0 yA({bZLEt6Ea P߫:ܟtv{ k=&)$R)9f kLD.E$40`4J0`#&!Ҁ">F'NJYBG*FXo p7]wN|<\^!tjNE&h[lM]5x >;{sGG$jv5v,O赭b 2aP!`\ -|w΁ϯ6埾kU)\*V_zu4-JB[Y=&@##K W]1e4R"`HҠ Jd d]jAF@z9:ӺlB4FafDDv$ҁ.gP& IUũUF&p arxzZEAJxpBa pJDN#PD'c ,(#A` LcD'mi\ n '15x$a]!fq'8giuP*g %ԞSӒ) tGZ͋YR",pFw_Յ#?-bm|5ofSf7MY=xO#;; 2)OnK{a~X;9_*]Q#Ps}wggYWOwwff~~vO77p}"pmI9.ۣwDqv"@\~3 ܛ>{SIlB)7d撜a)8N eZe?(^"6-㱧 kQ҈0l`͙]fq@fZՖGtTQ XE}f,ιCUf1|y:zLE ZQ_?PNEH9X2 X6| oOSe</3ߦ/#3 :1=󴯗(BIX O.2ԀrrSƽg{t|c3́IQG&*'|Jb #88/u63X;୰zxW`mԄd{lv`;#YyX}=>xAfX" t{2,fXi~E3[];NlAOa0[J4>أw?W_; ;A[ [/_ܗ~чj.SNR G8, q)}^R̃@# gÐ7!xBPظ(2 RRQM;0u0T$j]SW05f Xq{`mPj׍ql @ݱ*xT_YQmUU]8nЖ2* `Li~07@Idk@w(#(= M5 gj]5QBiY.[SW^ uM%%H Wq\ԕwO%Fd}ړF1^P ^HQĆEe 㞛ԾK -+,SVXZnPCqIm>,]o; IcF~]nC4,Iw&g;ϐa/h ;KY ènc _4XR$u~rKx>lk)NM7/ŷS'~tZx\*g*}*wi ZWNr?o>S(vW1(CL3Γ=GL[]aLz8ֶC' kϔ+Ou,Q=K̸hk+x,[%?d(FXq癐dPlF [/ɇJXp>ίJ xiE,˃]r" Ju5ggi? vwQ9 CͰgOLH#LҶ!Hw\-_JÈerZd[քbg9x`#6.4σVJ #WS2)UR̪?/ iҺXf`CߢQ!E:32 _6!/? 
)1Sb2CWwpqtHYL (:DQ@Bp D9KPpdpL|,2ޢ}-b ~آގ>}0ۢxh%lWxL}$*Iʔĝ[%|/`9>"j)~oWGSE~Cm\7cA92Xs5Bh:= ͌&2bq0m!zvA\ p-#Jd@XEcL5@8L9Dq-m3=[kǵZb"h}, IZ,` %c8)f(E6+f%ݔS!ީ r#'eCZ$o3..RTn?q.F̐H^F<7G4./O7T*׼>69 -ć9|IWQqٵ֥Qu{_X0lG.%4*x5!IN{ZˇJ۞P Ip[|0^8+w&/c^o {2=2n qutHmv,mv7xƠ!״1vuGc}!#-$zZ(QW#=cA!J F#%8R޽ HyI"C|r9BJw mݥw]m)v+"3 ԪXˋt%پPd1_ #@M*՚.Qnw;q0ʸ20C^"Auz xl6Js񞞊@ȵ݁lq[޾Ukz[/*iKmMNu)GM>>|ǚfkX1`Ií?%y;Nr,NY" ~jb{w}S|C<=iU*r:6;{"0wZf=='mvdp4=wb28G;i=y/1i9~$(Ęv2k3m`0&y̛y0O'h|IPO 9NEpTO_|v^|x @|ǡ9Y94oyLzgxNe妩$kDs.f?gn׎_aLfzіo[Ի~ :/ټ2<ŏ}ldm̎7-LŔ24֑R dFsd<+bBto=:$"42ȃȗQ{l 7QO Ol*B7ʘn0mlx^JsP20g4L1']dJĤr!:Қ0CN !z"E)Ԁ՝\2i etd-c[(p%%yc@eal CuZ"1=vdb:vr 2}2/ Ȝ^l5yMnŁ[͠Of?s}:ݩeYp0J)\~~J+ S4/ZqdFO NhܢdwRZ׫t™K >" ?VRd!V ~т6t)Wxr+kx9omټG{bp!uǏg'^ $cBµY]t229pV ol$68*ѨOw2[;ZA 8$W疵Q"QW\dĀp% OhDxXI8WB qTzbЪmz1ť@V~-.ZhL~1M/J1wnG[sټ4v+?>n}HW.eJqi!\ލGnp`*퓬VZih5{2(&X^C, Iχ=n LN4>jVU4:6 b,Oo1(ɤF^B}+YoѨq]!{9I_a: Dm !%~H RfbXfww9NĈ'!!r T` 0f}*|5CJi>?pE_KM&zm,}™e:)} v /~ygՄQ4UL8jmI& SI:\v Wo_$[Z):e^˄ӁjIzL)%UۏDl_R#O%7u>"B arZET6-6u7H[;*҈9M+̋<7j ^ }X2;x!1 8g*QSw[-`Y ȝ> !G`Y]tl`T}?y|i=XWCtEEU*ֻS}jž '{ Ub՜΁y)=hW}~5k"DJA'e*o#Uy̡ksb҄V9ۺY #gD}ࣕ]/FWxjWwغA($^b)'=X;~RZ,ҤYPd&P a|n(3q yV.b[t3q?LjO6OJKyPtv,ǹYZ ̏i+2Af8ߓ<}~1Ri=;اGæ43WQ!Z5 ֋뻋Q?Lg |xs3|T^_sU^ DRlCkv11ٷ"/փVx  B" V'+Mlx @|m<a)_ mD+i T}( n0[c4b1#&Ŋ[%`-c8z f2bcɪ|@dH?ZER]r;:O֎o՟ J˗O\ԼiImWZyWçz[i~g*Lu #FJmhVrY֩8kU-r0npTE X k!kIx2  zt 7ۻIo7ϊeSqvq6lvL`[%zs:U@cs3.7["2I롺J@9> Jԟ&suvKy#) 8TT"[1t^e:f68kJx!1BYr)8s/ *iuJ9: 3_s8EV,\N*d\)"QTG%c(%!` E;ŗ^IbW[hZTYQ@1m|s"Iέ JlIwު֯G^3bX17k }P,_JuSgB_ *1嗂KHy:T)X1ueN""^5жۤN1oS[T(>yV#~qo0eI`~F73m(qm+_t rcY~O5e! *% c`KD腉`#4p(sp5Yf!$*/CI:t6qPŀ]6O]?n޼7 w1lE֢TLùedTG|Г0t-gR_f?͑$fLc v%Ppf&8SDGvǮ(֪hAmJPx3rZʠ#'yj'r2ƃ^Vl~C؁*6hbZF04aJȄ bb(Q%(UrJ[Rw XK.H{^YGk RrDZ=k/ !c-kniGSh?[Lef]dN@H\Irs5d٦-EcRIQ@׍Fw Y. 
p)Oܤ>ZI 5ayBg-c"WҳeR_cD1j|io%򵀜S\rDALN}zȗv4IKz85L|m8)=l.8+6V s+*݅ʗ7>[/eN{ɾބטy")`0C?uOy+Μjzi5J'­*J&icT fDmRDwccFHo_}\+$1r28 wk '!cQC.v[N;1qpiDxwi1J1W7c>&cztpo ;fMFOS$*giڠ]Ÿn֥#O^vY|r>] ߿q 1w{Ethmmofͅ 5ʑFS†G1I%c Zdqz [0Fن<: vēZy]T IBR5 _#JdE1dՍ* H;We.La+S01n'gB >"֍7Wu^}VVw$[5s9ΛbYZ 5c0u-Fkq-uFUpLlr.ZK۲Uc,V)qzݔp&|Υm>-s{w)ϟ:$< Uw*O4(buܜjt!X0J ŪhzހF;wRY V.+?nX˕N?2//](-cܸ.KhYcKMZ42hς[007c ]Ar:svxdG'%qB/f[f3K'2$Q5)a9szo# wtc~h?0bѭWRA)8Cq-'=͙֪\{Otde'L^R}%r"ssSFсQ0;)#ڛ٩2:N٦, i#H{eBzNbo |d}qo[_`hiTR1EX ْ+ CJ滤ph]UiJp-&Y31zi뽋{{LWZP&M!*G F) E&u:uN̹{'TDEi%pzM4ig+,$hd1 VU"@ ՞m`(a^ d#m`Kd8Ce„Rs+ QMfE"bNጏ<~xAyndT([r&XQUUʪ < RZ@ڇv֭WmՆs.TK!;/&)Vt7_ҒRVV),q8PW fm 4tH󣷙K' f4g<'Kd.pҌ#NP "'ZbFi&gB *7F%yiϛRZ+A]écj9ӝs ).kΤia5_UeS["o3qEDߔfZJǟytD: Ds6z @y:@@۠,*'mE5 ~D&`pԪb1KT8>v , \[ckQ̔yl\>dls$q#_@r"ϴ6+ eDTqSE!teePזֈM}ʘg1"jdvE6VO> R Bj2v7wӸלƽ45~w΂.9'(ud\b$U԰JiEu#4\׉:ݝ:b+ TKZ%ItVI=$m2®V/O %&w˴FϽ|_L&HubJ $J[ *D 7`@s^InM u/w~=YV$/JHst`JZaNhx4$Z8p/J+diIIsZ֤xU)hu!Oi,~龝Ƣ ѯ{UyzV:?_#~wbKq'ӏwlHB1\~ן ^\Cɂ ٯxbڀͺZwܠэ۳ẅ%$fvƓ=soyx]mpNJ/4^Dz R<9Pj yOtoa<>\+ζ> DL܊_`;YYri-ZM^8 b1pF%շBmDZ Z5yFvuyQA!?9]:GO(cFNY8} 5&SރZDZ͜rBjtMFEsy.ʈ_(}vEԟt|j3/~yszw Yu{u_u ߾yCɜ%s*ۨ||}լ—@-g&כe\Q^nm5.T.jh= "2ulj* ku529o%?l+\5${M n᱗Tݤ oQ~zh\["YX\]̮ʨZO<[ԆpɃԓߗsΠ+E VSY@YckAoj,J1*LD+$? E`pCG0 ltd?P&d0% r)\J{%ib~ɤN|tvzwcAt$-ȦBe;)C8!t49K*(ͩװÊ\:$Hj`aU[-D}/*]Ԓ @r^/0.b)BNjZ}N߹@)=)xV@*s48$̼ϛOK!%{7R8t0O?K[¹/V%{_ÛT;ͱk @I+gYؑgp tEaw0+;hp ٺ0>??lT Gv&NiA^DAZK }?ˌjE˺{VBxHa^dȳG8vmNQ0rY"m(zP)|$<)Z,1LDY dr'o%UZA*`42-3R8Łk!'R(K=?b#/)`!U(-l4R)K{0k{ >""-F)M&!*=;$!" ZZE!,QK#. 1˅!D Yg:+|Yg1T b(10ehqHDu ?#,P͜(P͜VB97%mg.uk)*]Hs1JЖG*9E9AWx?Z:0Os'~:f#@g!{CY z>c!MϹ39e`Z}!Ăc.psDzHe=+#F+@;["vy0&B LGD߼ ky;R)| ]1̆0g0t%)H\O' b¥9t+ؕ,fUVp`t 9+ڲq ܒ *JԹa6;%.ƔVvNJα.rbPye4( 8JN$J_ ;pv/AO `'N.Y{ir>jAD5cJ *"O5S11Z#5ă@c &wb7|3%(0y"Qy̖kV5.x/yu4p.I5 16NEcD \ B )`/Lv܌$[oE#x0k8WDXEZya*.5#R^qGϨ:𣭪Z;21L sfJҜ:l\i ,Hp,# "J{G1X-m&=)曩0NDܬ+DbTMʐbO+UV 1#(e6B%. 
GB0k0A`,0ݲw`m~w܆W9try'˻#0A=;BA˝::barpգ{gf5Oq9EMnIϫ9QЪNS&ta nO6׋ `x4{>]xkVȃDlH=;#5uOI'pymjo dEfo19moP7nʼY+6½YO4c@SYo{{ \̪#T"lRu~EWi[6i(Xt´-  еYea1PV]6wג/ 7{j2($hs'%ѽd QMdnFpbB7atmT`=ĸ|,O׫ 'E?>\.cqJpqt(BԦb[/P 6Vņ^]$\[Ю\E'"wuY+U/:yT N0%BOwxʔpͦQ@R>CR ꑀSG=RDVWÁ h4FT4- ZVjy:RV[x×{B)/C H|+(`7zs-y̝5_^t&kGh6~i%$ K/C:HkpBwh;b6 OS޴zbOxR D0N.Hju1[2o=3Htuy ʮnj1E1Ǿ|I?>2Ǩ!T _Y@sUɇO/sD~kT}S3h6o6i;?^vL͗E0fo__oFj2.IOPb7 !$bzgju8ߕaSu C-`ҝO%%/Ǽ3fX1=x޸5>={%E;ˏqIDQuYWj=Em&#/BR1f}Lk X`Z:Sp@ctŰo9½B]G˹Ǵ&~ZaR,`BXHfaZ- u\%_oFj2ںO(8OEyѽkj2٫R=CCku B!ھ:ueoy}6!Y7"O"{u9#R"zb<ah> δR+ B""E>MG]0T**:Bs#rjQ[k|z˗p@H <0K43J{F5N $V {^+2`:J= pJV|u~ŪER7#{qꮗL6jU)9.0`qLO/E-S^S4Qۢ:F*eIH'AQG$Ca0H fZtd.gqB48dP bEbknf:QN~aQm;z6wڤ}\h!Jȇߐ7HS$5]?W0 (Z)Ӈ>#?hן^L|yZo7E\#]L&xinWt9u3(EG !,--baUQi=:bdrYO?'?ުAt.J#], ͯo~[: }Cc 9֒)_5 U".Uf|ALߠ4 n@RMie5ې?Z l0H7k=Co `gll~k|j)w=HF U6ʁ ЧgrVs$jg7ԷkGtpz7/u  )ipb요?y}p/Kika_D׸!Ҳ\Ǿm/?nz8\ݬNAsa\zjY-:QC[2UۆvNi,_ecƙ ԝ~<릒i{&zvcĔrc{؅AI cĐk;ORhzMLxMFH&#$ >8S +NH@-ȸ`޹8bS:GQF%Ǧ58GH}XMF[)Z58xT2B =K$J(9&U%U THy {UQ ä{Nurϫ;ӻNWu[zE6htT 9cWOԈތi-°Rʱ]L j g S| &D-VP)Fa"t8"HkD^bnaY4´YJvpa9Wg$-Ĩsv`bs5A氃dq$jgCSM8 Ke!;duW49I]Ib=\Z줹>-&5PZ2˹v}[sj/r"y%̮&ؼ A1jvqqNx),M;\/8rG+XQ nYC s̓H`A@P,8Z")'Z r$U*ԆYT`7#@k\ vQ>fFU~LD]_QCqk!j4LfSY?XU2Qfu ^k^,Cg-ruITGJW|>:۴^>ɓb X]XΆduzbdj)-2ֲzxFU t\MT*%¤ecW!6-F_@(ąh[O=.+E'b,ɟ x?kyYy[_oժ~@𬛷^^]xԆUVǿ¿`VY!lf" ."ׅy0Fc14X1l0B<3~b=XrynAw&0y>AWÅIQK$6bgcLwwͱhlQ'0 G7 )8li!hsDI?}@- -@;ALkmXEЇ֒~19ucNZM`FtD)OΒDU[%9\~ص[bxŇkq31(u; zGrm?/0(>wxs S$~v̽h -nҌ> _Z;95>w[(y&9dБ8~$1_" yfoY0ʗZ0€}@LG|(9XGbB Lp!ψ!s.a{5]1;TT0qT'J=C0Ҕq4]@BhID6sFˮk~v4& Q6yx$2(T:Zk{^6gSbms74ωEҺc Ï8}B%!|+#2\#Nl7[&]!K{w'N0dp\̇f= b˻;wǿ`JY_P^ F}L>08r>l6?zNqm3?O(Rͻy%@wH3zp6B[.\n (]/\RvuqyK0xI EeN$KNܱ_2nAS& Ԗ-P,(6Nތua XgV,vދ;^daSj/,%AF^3f'c@g HQ mV<\MsZU?ϸ h<~J U*-b2Y # jIe Da)ny=@,2Qd@g>|JקiS+4`RЯ<0!҄ Vog9\xmf]rQ`G}̵\nG-I¿O}k3?|:g0j-iO3ݱL1RhkXE&vDI;0qw:'t;'367߽>ӂͲ^Ү?߹s}PsY]P.x/vt mzEoұRD~tH?"J Y-[;G%pp2uۏ/7_=s8ñ _?C'ށxx Xr0=:OgpEW셯>U3uN~wss7 h0@0#tR{ðH4S03pipMpv>{:<1wQʢG뿔؃g;a3q1W$&RƘA%t~ 0޽73ӻ(zo DpLo>ΖQKx8o@ 
퀥j0[GZę8`7ȝ}}აxz}o=zs1|{cӳhƽ^ɣaTގG! KLg&/[NCX Gz='WϋX}40M tF?2y1O9G}>= ݟ'Ȏ)c1;lAرѡIz=8V@3BK*Cܙ }9pr,"O_a7F0ݵ;{TH3'Ar^5fl# rZ-aT6 _Z+Me m%Ҳ:aтċm\w[Nǿo RGgre9$I HAy51^̓OhtƸfkSw_\ 7Q|}K ~t~jmmd& 9uFvVފ@'h*nِ][ECv󪹵PʨkgËŤrH!P d+6;QsٷֲyFB3aܴJȕܼ7t FlcR6IM[[Ƶ˕T-Z$-cڮv674BӉ1v5˫&&!U1it׭Rp:Nc^0]:Y|r[)U&DQ.aoflӀJ|yͺVJ52 G8NbJ`pV9HFH!Ў+;ՔT|#hTcV&*7K>4[zT"`E,ή4є&t@!h1f+\[_:t[:3;xef' 7(/Fo`bkZB|I񥄾l`3!)Q!(O򓾁ilȳS1 'K(!xzlǖ hPԗwlѦVJQ*+Z3=). 9I* T0' Z Yzq0#7gLQV1TH]K}\d1ڎ+0Jr#Tkn3'"v|\;NO2a;Zψt1YF&7p|GLoi+Q檍٤"N؃2ZKx4 oD]}!i[ .g|5Hs$%p脪\txХHXxj[1T߰`^0YVU6Ye{ 5ʿ/L ϻg2[ 5KU(g{V1Z7^S1d\v!gm1ĊFkWZ$P x6LBKgmЊmglښBYJrAZBqtauNҙO:6liOvlr.jgv{]V&_mywvfҾacN``J/ab}%9xr H -{1FLJ&Dc#]ld\% j --Wj -Ә DsAI0` G.@&(q}5D„07;TG:9iWru!{!ə \ŽDf %k$Q{V O| S$6T6Rn95 E B\IBKS7yVT 㧤(qwa]q7ҒFLL WFܨb7>],*Y7nT saLW S[.t\+lmȻu?=q!N-mI?w"F:Z%8A)LmSr\ :5dK[&]!KϿ I ,Y/"][Am*ݝ_09`1 gJxkKK[$O!&58r>l6=J yA#*83c,+(ޱe3F[+`)d}Pe$0Pn#($Y#)$cK/X]N48*V%"H&H:8jm#0HԮquP-d *"]1ow)Fuaݩf @]D*X 厼+;Պ [opQ-Rd86lb:r6#I,P7;vtzܝND )Zo-I7SZnE12t#tkTPQt+Q1B~!ZhLMvHd:ߑK5G„I®%Y[ #;)*$($zCblIHq,0ޒ#<"˗UѴ"F-h&teouCeeiootalwl}ȅH;'@8a8Ej&} RbZ9ِZdjdn䯳geEBp#|)/:i6d?+)JTb)xzlǖ hpc5ܓJd w ƙS""GуxeI9>!|2yc?U '][R=Z`G0^z9Qz>3]1i?j-w r9OmAio =sz.H7k&kvyˉD4 ݓ5v*<TJy;g>{ש]ݔ ZP2{I&.lyGd}!F&mpQ6MXcy1u{: #.O:Sr{t;` ?ۚPkYH~6,7U[]M+Bdَz+P_A@W )@b'dT* 1X!eCnNAR~%gSn xpzsXm-hK Xdҵa)l486f>錧賥=iرe3r:Mq3=Շ&SBݫ;O XMB,#Y䪞4Y&ʆ4I1T T.CgLqOX9,c1J#(+Y|yFxX]|'P V'v=*L@;(w/-h틠EJ hAKWfh^+ ZB$x >2՜e rH^ʲ K#p…l&Zi3%q2XaΑ`e4eh{T|hAU|*Vh pX<>~EXfFxyxȺϧsQ?y@S:G.zLm\pj/``g MNwHI?#E֜ YF"~߀)\HFxeSdnLR7-1CI9ҩj 7Oռ8w_̕i n^,*.[ՄR;”OEVj]mTv{9,?rUZYPW \x0Pƽ9tZU)gOb;́.IC`jHey<;Û?fnXafQ UW"(ZhURA`5 () pŴ;HGQG.Z:H͚) R}q3F}G;3%ՈKNۜt1JCtA,y)C}4A$Lb9RfxfA45ޛ zĹ5^Y%y;%A^c;ڈѝ>y1#,?XZ sܜQ;{W tyY\#dg>x]#ƨnz)45B{HjRwŧYPj_rɣLvwޛ %s ޝ]}On߳>tgWm8*ŕ;dF޼sL.$qOĥ{vΩ(^K+g f6\ !.3g{Y\LiI}o|kp:}z?)&Zdvug({zzڂp'+hOOH "V\!y])sƖ/ctT\TӫF*6ޑm Ӈ4=ɜ;JpivXuoQ'vWlX9?䷀No\_σ}z4︌ӆt%uѧmApr C1db9Gnn/׈f+)>3G)o{*8i[=,cU!Fì* [>ląfd{-slO8bB՛m7 A3%:% 5DfS/B,?v: u=[hDp*o N,h؅m`+8G(d t#@£E%Km\]kPDC-SzwjŖ+{ 
ɠmپ=T^d1hdUr%Skz׶i&tUFs{L|):Qh "|@+VgT-x_⼯"6X)c+Fi.>P egq ЉI\F8tmp0@.*`m}t+Lzˮvp {qMuko7Rq0}c th.V]/>XEiqur݂\&n9 V+E_^ 5-MnVRB)^ LjZDrƠt%0Vt̃iW|fo Ƣh3]iTigqN؎,LNNF-ENk G wB_hlŞ-PA HH\!HW% PB2:+^ \Z9coZ dye_CK/,C`  "*QYNYsq+jh 6v:/ ˼?"jXS97>Y/6̱'0 1j?#KocOTz3xRl܃ ٹZxSVZ+,fO[x!JnT+!N9 v(cci˯EY"$jeHV1]RPetX]Y^_F`Uc]Bޕ,?Y/[M0:hY BEb ;# d$/@ vx,,kNoY|eoˋ>Ik23ߩS94RM^U%;W ;6̱ߛ0#5h x˵UXy]/8Ckx:~'Fo=J~'ϵ=xocn!:E]k҃υz[IJ"qGƄ4dsAE!` }A`=~j2=x'GU(UrT #ܥ5DYV@\u!N Ь|wVFZ87Oc5SO[du4rJn2/ rWg7/qB!~dZUoZ{ۖ36;!/`lz'?\Yqӎ[74nDF\oBZw"_$˻ r0a K(<Q(h Lirv5(>dRV(fNG6񲘆;׬if67[`宒IR{\of°]N>d#ŭ/^M(zSQGJqzѠ<9]hEjs7qOﹶ? X)r '7/Ocy~2ӤIfg2~q61p~ɓ=Iy')O$ɞdG}rQ(``€e@V"z9bITzSJ)Ry+z%VyG쏣VY5Ê\oG2q:v_8o+Zanb4&4aɟs=Fhke=|PqowPf@gM/*O:$>ԇpRO}ˮV*DPi?{F[[@Ŝ~ ^{;O]I\2-od{AZn5 w7CY@/oܼ>ai\MӞ_$tdqbR7CݧywuvF$U^,-9sƓ]sGrWXryJb)WWɝU}Tؕ#  JH,1],$.;=LOV`}x#MFK%YͿS5e2>Pqfpj2Gz ГfkFjW;<qxe $G,2&!ʚ˫ vX:p%Q/}*x;<_lO,a$Hs [ݢwѝ)&ABĠC8NBٽ?)[\sEQAE_ytmʛxcDo8 65rjr KYlx3EcϏQ>?FXzrTk%8RZ^@㎔zVP@`']rH@Y"oOh>svGΞ3bxEQ#t3Z"4`\FH|iF9#. XM4:nPyCEspjVĈ;rqK&J$w\<(-(fZ 1LgEP(#1#f|̝DR3Nhl&,R* \xV+(v<^ 3/4mW~:gZasyt5v$lSPZĵAآ ` * j  SIq9\Kt09-1u0j| 4aۀgvA[?n4}{ur﫩x%UBw"m\M NrR$8ʑ`)5gk>D'ςV`(N-l3X3&V҆E '`s* 9FS!c  Zש)Eu-z%B᥈\h80doA{h9H'tOt*t':(\2]!8aMqbVz@6Tx遘:rأŰ~W- UO OT>}.$`ߡl[rW}9PzĎg``JM:$?7LK\")3!WрW>Q+j\PZ&m&cʎ1|F1"QĔ" hGȰ{yXpAZ/땜s%hmvE#KbUJX!(2i TBW"[9u$>h^h[A D/O?9[x™Ckj+dcIB/e1uz1%Z#lİO0 U˥B&[AY2,qvާ1\lXҘAoYtVrɄ0ik4Ie |J2A4SFpA' #T*Sԓ [s* }M"!`=rnKc͐3 83~>zK'׽iV 2~ h֨^VLc_n%\2DrCiMqI )엻|e |P̗p_%*31Z7Ӌ?muz=Spfom@"{`J%͞.d|/dgT(Z#^&֟u_brrSFظm_`Σ' CU20*90+ٚxB"5X7oǞo{?1GLqdO}HOwJAaWLOoxǁwR%8)Lv=%_B :E_-; BN')kuS|I'C, ˋ(Fh S2mC2 ڲNt!>@ynQ6r6Q3Jsew!G W8Ps@T<fV/X;jN뻶GKXhi^xcy;T|(nѐ>#ݵ 8jN zjS(cj3{O5Qû?}!ďP+#~3T9%1oWuV{aO{|r~훋?3 Ǜ$1/LۥzIc,)6<̊EGi%f=S1ܙZ07l,Z`lfHKYo~*j\ u$!_Ȕe Zh7C`-v;<^[4ڭ E4KjǠ[jοWʔg}w%:$x5U*cnZiCA2E&P 񓍻{X*uU(jv\.ןnSdAVu wQsx8?=or'!j&ɟsh̚1մǷjDz&$. 
t' ]^^mAZO!V3?L6){(wf9'ь;Wv+p7)YL"Լ2Ll"'KK\ߒ@KcTP=bߡ%҆V_}?1  QwGe(ƞ~U`,ʪX = IU6Ҕ 3/ɅOf2*؂@UrkO.\F#SQRNF>19v^cʌ[ԘwԘT,TS"5x1ARZlP8r~Ù^g/Jfb/]8h8TU[}ZJEɞ1⪔w⪴wd޽ALڗOlnZ •Mz8k_ѕמQ&s 3bMp%;oDpդ^rc}5.~_KE˘$B8m1ہ/I/Ǭ/eQ2Y*-BӢ,¡N^s!> sHNEj9E I-j焐&H*l. i`Jhyƅ!jPzp 8'%#q]̮@htqdYZKl)n )󰃭]kL pJNYEBXl?zƚU̡ƚtRwAȷ46<Cwn2:e%5Bkj06KWNj 粵ߏ' UfNUJRE[h#.:x kaMAׯI*ܕ+T έ쀭$XJcK ox 0ֺ \VbpBDw[rΩPpewbdeTo*yg3 0;ae'Frb%/͜_=R9#*F(p!ʠ<3Z5-1Q"u3)4v~NLO$v#)! 4YkšL5H5`RM.WRS:ؖ[ b/&{xڲNyGm:-6+J2Pn% yF hIG&"%cmNΤi*A6`Ȧxӗ|и|zp6Ac=:io2 p*C^ yϪƎDvea\ʖL(Drk-z[,9i%) @I %^9JJ(5u[9\S0Z i+>T##sXFnB.4bQC2T,G19-SZF-^36Gzń:P PđX{hG.3x^kHX8v"y/NJ>p]Ӯ PNf /rͅ/˞)6Tє'3[3I=)jsKbOg:J&[&\LiZD=x:QoPn0DggЛnH\ R2BNPp+yXWc[\äTCb66}- dž>9ZL~fZȱ6aO¼ qkڠfQrޤ\V.?mp&vڗw?enHwFҶlN;MWۘVRї EcbG+<1n*@ ۟ܯr'liERo/!PDyDZ- fb'@zOKJcy ,HBhR(Z k*QEOڞXXҝ)em(%KZ(O/W7`=QF='<%8% e*b>Ǜe1{|µxLUS2 Uy91v?u:FSUPMJTSj)y+>ނR P+:짐١2- 3:So8#=-sɜDz Va)bCb,1R\cd$^(⃊+]mo#7+-d'^$va؉Gv$93"zےZbݭx3ZSbzXhJ*-WeAw307>deRf"u*$ĸuJ6}l#e%y0DI`[ iP!yzDwZ [ALTl[X%]h;h_o99E#F~YAK ﵆3#Ւ O+*S bE@88eyw{|"zl:+5Zl-͔~nSj (JD'\iΠ?݉D6,䕛6e⍓ÚZ`sfꆕ:]=Zh AO|&@(}} tw/3Tx:ZZ&lFRʏ+Iil$IPUVn6Zw!Jrg]Zl vvRtuM\wyęRlOo.Rqo;˧W~Z*Oz8ut)F; L5#fyZUn<ֻ7ߧ'Uoy, G8oB^.za  ثv{[xi/uߜ]Njo,WP㶠Glw˧MۿL` głn.3]> ?:3x|OIu$xrz{?=?\ |'x]y=lNV$ ?4%͗rrrrRm":0孢Z]]O'N-mvT<Ej.io贈h`}|][YFIh\'dSM`_8#z)2=[ j͹86/[JTJm]/M?U@\yѧAܲR<99W*GT'FPGQr}_qRwF,c=fi5cv\NEBrpZ$f-I{9@59+oI";AbIg6cN%#H@ң$k0䆱s>0dŗL4fUSuNV9=upShWMq'WS,]hF0~6U?'JclA%wg9 )mʾ3 C{ZZvg^RA{d@dWPRy69)ZQ7fUA Rȑڀ9!Pro$S..a/W VI,P렔\R,r3( -#N*e fa:'vhfFzG(eZ{$L`J)F?0\ ꭡ 5hBq_86]#m)`XjOuBHF3R,(RQR 񀑚9#ED9RYOIp|H÷v =@̙04 W9K-0426Gjq׽fek i8g#3NC;DP-sn) d cw]WK^9iGn_5һM9;6xnuJ+]x$xa!DlJFr-mw;*u [M4˦=AZ/yݑA`,Tp -re/~LTGBˌ7N;^:f\}(!F(I?*?@ BiRHb6xV2ݯ#at?}Y=d܆ .v~su;}sw[Y5&%,fvVeΧ|m" BI1ւךXϸ|;TVmEZkG2R6ݦX *b5@3^IXV dC-OWQ8nZ @ `՟6量c- s[L tb9"=I ל6Pemk1$l,xFq~kŷf\.R`fYDžP0g̳:+)gh58," [IAލaCHJ7{ꘔ8$|ezhj8:h:%ё~0LxdM@DŰ7+?È #f+3bl>M=QD3 : ( C!ݲar_~)~ _`DUNr.:# ZPyzTU/@aWYVץ0ŠIMRD,Z: 1IӅF `sBuY(q^(ӄCP82"D.!8 NQvj eYl)mNQ+PA 89). 
-HK<ӀKpI V@Q'9 YP cop  {A E۬w1@ut _3Q 3J}H66$d|p')"L/$7Kh)rGq.b}߬b p8pb?x~oj_pcS.6~/VD[@~KNjF]W͝Ϳ^ ܯq2磀;dsMD=zOZ#b˛o_iK D2 v{BG @16\KEۡ[I il0O8x)WK7E(!=}S2+wU(柣*(pIK`Wsȿ?gKF"7{9-qN:oy"+"+drz{?&RJA!PX@MlI [naa+?{ ߒߎREƋQnCoGBX&7Jb,uٍo$Z.DL4e9x ˆˆR +9B1iΔ%jii58M4(87,bEXyb  ó@·)T׎E`EO&x aM)s낏%>b.G@8A^JʔffG]/*qqMrh(-mz_P*;o$aZ vIZR) +כ)L=}OK#LkOqcVε.![ݤ$]Ɛ!\ 蒲Dq">5y'/ Fh0q+2'%=W[#"7qI1^\:=NO7-~^6'?80x9Og㇛PJa𱶁%nl{ԿͧӕP|y9q~wߎ/A&8=ݰwV"b\NFt֠8ZWd]U6+p"h5=W#Sc4;ӱ<ĂBB$C1reRX^(Ekea cWޗ$( %u߃!CiQFw^2C+g<-*ghP9\9*Z,r1RJIQqK%h8lI4-'8>E#l=K~x{7uтꋑS @.SZIɉ:vA,$E>9;ooPUb1pyE)}?Vgo#..C-8?1k rT8@KkдZq\"jwCaD'=&S@d`XPPZ-N+mb)%JXFe«%յ)ٵ^6}XY8j*wM룆 ~Kџ\uS"9w3"nG᝗nlBĥB2}mesܸŢtsَݏ{7m֋q2Kcq6s]2{HE0&r2,g-ЂF;KDt^~l@.!B䬒Qf 4@qB/"p j#mIeXU{*ӽkq-U7(P=lxYYdCpQ$Ӌ (6}o пshd3VE%=4$YP UW0"yA"" CUP& lJ|2mS=~\ܮ!HM9wAJ%S3V8}kEe)7i)x!cJ g3rK#ꬓB8FsppP xT8aQ.rUsIQӅ5UCI^P|RgZ ꜢJU2n,2gaYIK @URR/ P˰ ;yےZg1YDPeYx/JWO٣#2?^]LewR0V?wnFßcdcN, ,Gn<*oݽ-@`&Ss,&%3S,}zYYܱ~㞥<(*'e-Y+7,Rtsw*i6n$Nf⮨SnMn-X+7bD>n2ϟлbb:m4n'\ENͻew4ջa!D7lJ%ڔu[Y^H%ԗkUyGN~;rRPsG$@,GP:j"g"3i<1h͹T©ӈR$T:~Եn2`g?ϱԎp5}'-u:lJbE𼒝z#Z{'RRfuR S(.PwH_[],wY${`c{-y6"[z-[M?(vUSSUIKZ H6QnAS~ő"dn,6f)/y~ ˼Oǫ+دbutnmNQo2bvGl98wONjҌa|CǺ{tT _ὲ* :֟'o$01LO0 Ǽb{bV AQ9h br*u  \넃uJtdo]踤V 5BK0ֆ,t^0oDSDp^,TrH,DKuӋFf;Zu"Vwsԗo 5Ȓ W+^abr LoeȓMR,#lC;P\[V8$ rXA `zMr`#4Lmƚ3)aՄ#0(J0jtm RQk`=凞Tۧ!q] kl,p [6H/qJi!ƕD]@`>˯U dR HRSjhĜ@k V@q``PM0CV57[3Ï Ń0BL@ӍF3l~"vP$Z .)Տ.q4Sk#q671W^G- AVɀYVnݯok< |l= *lʂJm*n(v,8^Ktd@ >&[n~ς3*W2%J;YXm|eaaF@p񥅶&-I%-٠H 6l@v FT վ2PamL5 7k69IZ,p `2,5E>BeB)iG%}d ,,D%jm@SX64JSONP(a)SZG`CHDcKf2xu4GO-](AZ5˼gt+zBIZ Xp5wM./!_-SKkaHEbn[Y3oAFߥ(jmUbu'[ i÷zvVhD=ާ'#\hˋָjXwTo>^Ag qMnl.[% Nm P.17G~CgqqDi M}wcNׇKFq;0@)uk `2+ ;81d) M0&P>ow=xNh[I÷NS֦ۭ!\cq>+˔cTbٟcYeSYw?jEhdžņfI(3Jޠ}zּcV 4Vj;~П[\(k}tWTmO>ff}+ +^GIhE}ъGUqp& )22nĽz18^}nQBĖ-TA$A@ఱOj($,FwQꤦ G)|0{/We6*mڅy5rxm00 N߮]N*f$ㄠµvdA!϶Qw*{3Xf;_ ? 
7S<|xx.u0}U$wg8]SNJ+]݇T#B -@އ-$%lf ˁY&%Yp:^ctȀ)B[?]5#38E0'Ա$xӶUfk~2Yf/6H+T+]d1?d+E~Nvqi$ʼn߭T+G2MFi<"!,gD'~7ì鉕PZ`vS~|39VD[8_8{u wU^ኪEnjc;c9>ߍqDSZ!ӯ\AJiG\weiXɮ,Q[WKv^aZc] UWs T0GUf_%V~\#vv* (GH[wĎΌͳ|,9SQ̇\+GU'.ig֍\wwcRwgJ"S~F d\]@!2$V1ŭtX[#y;X"HmE+Zّ GfQ-U+vkby|b#V>E)DQ&[eQ^e،cw<{%;b5D`IefvˠZ}*"pNXCe\.jY;d!aՔs+IS}M+F|߸z'bæ?-IO>c{dDz'c8u?zAbƥB_^@5Gzك/ejB]%_bזq1vPlmt9=F^#_¬Pk83 `P_n6&_oOdł3yyh9z(J _]}7IxD2հָY37Oߧ<<ADTV?9'o&a4/Y绻7W7۟3<,Gw@m|fL_kl&ϹBiפ JG$JfRՈQr}vG 2R*ftWy8%Z-\I)i߯L5NPhpHe i3/rp {1fȅS;Hqbo׍:y=Ԉ cl/łA=+~ {zMgn2J5QTFեPp+^VC򮟆,tqX  vCAlzz}u-OWo6iz+#Dr>I}-Ia_9׌U9ZydANB 1qRk#׌V `/i}h Z(XSM/X^ eX4V<`J2)Fx&)hq /\==E\;?OqNY><߹ò:ë c6?^߬?.|˚>cAy|qrrl~:ePv]]h^oXT4?m\|7~!=6ϰj_Y9[_ E4K㵵$ǰb":費ЙJ>[ ExR.˔s:` ^\PD-+RB&u+C^@ʵKsLpC~qzA~ƞӕKF͓8cfAH Z5C"KWF 1NBͮccÞxGX;}-&G6!K5z_݂,kdw1ցtʨ;oRڌ:rұ-8Z Y'J04҅_X`,pWz'+p)( B{Tv1nij7cLwX bL}lVte6m۶,Xp,Tt{MPƱSJ`ZI/- Vtl3iHc>6?^hc:>irLxåGk(1sUfMdF);n VrJJ]s!h1.[dwD!.D@'c?k*{]3GMaw=6x?NüP1S;nv =ԍ'՟_,U;L5{%zћǘ)n2M =(/:Iϳ(!T&d "Co뭒FZЖCbk^R}E%+f"ÎoyIw~|7_g`ѼP\27RpNa J &J`LGvF9)NQm5Bsog' ;Bll4coJH`jٻHncW$Όx y8$!IwVHӒz.=C66`iV=WKU_<7Y!O-RFg 9VnvzZpy _J\b=D)+'ՑX%ˆ4h;;DqJ+8ֺ۴@1)BCʡ$4K#7 58iT]v6˃Ê35OeH^m62w[\rsJߓݮ? ʁKᤖOgB*K oJ eK!\_w TX-) )W8U5vXs L.^:0yCl5AfE!yDB6Zމn"aH c)&a&KwY}{K s%v\nCճ1 7 aI KnHX`5NPa qKKKN$m%bZ:8u_Vnc wPq{?yT2.wR'fmmOl?qze7y%aʔMQk/A`HJi x ?z ;p/ &IZB_G1Pꯃ_VݴCϷʵee~g ;;xpǰo. 
arl͐]cS?s1%Aѻ=n9%Q[r-rK]zX@ٚ\B9+1pZ@}E!W s;㍐B»KM0΁`vP^>*AqKXQ:)f`j+RjUFZ )"CC)lЩY*5`[烧 2T bʡ#QO?Pu$p`oSH̗!qSH35 !A^d3OTGj"e-5q:_lφ|ae w+5b10r#4f:214&$4 y&eSl&9nĘN;Rf J&pг[vGcg>,䍛hw[؃WWDW<]݊SnE=[n#FOR4%O:OjUcpǔ @vmpPY6܇;_ecnۭ')(nɟ0F ,-9APSN FDXq)CʉqS˱v79γ*wʵc'_<o7uMK)Q/R ~Wy|X&ػ>n _pƯ<2I0&}F)n-G*SbuzDrӆ٥87Uk2dʃQ*I+8!2V"J}I;rrӻI?8ƙ_x[- #!Wef7S3?2A'tR{xn5肉3M9LNF8GU* Qw jbǁ?E3~17;e:4S#UwyBOyhN'' YGg9j3ow%fL)5p 8Z 1&TEbI OuIc&irn [ ߐa'ZJI a/z RbˉPDaT(&r|J}i)gsϸWRJ c`TZN0@9 L^HA,AAnHϗq9hk)l7D(j qFȡBFI2,dFR3dm~'1WQ mVRP- **$ Ӻ]_C=|Yvu_?լ&%A-<LNF$+$} ǭ?_/w*f.ss6'p=Ji-zR-IB$!EH ~.š鞅Z)*#!Z[uO+=G=rۅ…Y0dBB- uWz/=R; /ԖPSXGJ{2!F 4eZ;/˂HB.JQg9\m`m۱,# !F06Z (hN\6/-`H > S %䱚9)FGI$>_;ÃLR` }LbE]!K,%xZ@yđ\, ^q _ɇ3`cYKTs5ya#HXKKlviJ$ PJߧuI{v>u݈2SFI~AKIvnfX--rKTA`%)q 9&CZ/&TsD'r'02en's&DWGI. =ڒaDa6hPG"|]f)S-h$d--keF3!atB;C5v(JpZu873qg'%-WxR7%ZTwj,YLd]nHgoݺKMFf>*"f-Z_J#'aȫE},pmH3$Cؓ?=p8x7MPh÷忕ƍB3\"f|9M#Mρ:LDС8z-tz뱘.F GQ_{di:1!EmRӆ1xl ?:Tv#}ĠQC}\)ޚSREhϊ" a!oDsl&,f\ bL')ۄ'ZFiݡg[٭ y&ݦW1_d=$A[JsFݪGb ckTs$_ {caUFNrc@ڐ^ ӥ f\ ?Anԛ Q)g)0`7v\Ԡ]цn5ʧwp7MO(9H,h:n:8xA-vU/RoԚI׍U!25+dwjZ>+dN_ 9|B J}weu Y,Y:m- DL"3w/N cDs(@ˏ- *` Oy#t{>ƟTi>]/_Ybt3QG3bUkFC[ NN{I]N\TX)8-0 94"U/}Obt{ʌrvߧ]Prvŏ_4ﲊo܈%gD䓠s/GJw 5 y:Xq,r=*-F`["2O'_H`Ɉ7ijJ)0M2-&gH ٷ& I R 2i JY_`K c+JwҮI͑Z vF3X#xMPOkqL:vdlC;D9C aLs9?(lyKsD:#H8sĖhfo{(SDsp65' 'Q=*4K F4xX^]4Jn'K4RsǪޘ$cgp&ci1ԼC u7M }7U]d[b}i]}{<%UjcǚvVQ|R< hޑ7Q55r\-䍛hMu ٟݚ&r11wn:Ji<s\~٭ y&٦" h$}>'>OqW>(&Qo:ӑe? !~r=m=fET*@xU^"wx]~-rs#6 U{88OW+P/n_?oS$$S#פO(קj@ eF!XyרGo<^-}ƺYQvpx^(WtI(TIziE\pĆ%9.?v$SiPl0Ra,?)y(|?zCN*4Ȼ\K?. ~%lO'!:\ܓfI݂_}Mg(\ %!L2X4Jza_ohRJ?BZ/ aNxb;'6!G{ްr/l"rPL5!8*>M%'o@ϘW~q.^.rLǔ2W'WFe4FPFM8mWT`dq7&}f! 
U3xRay/[.%a%O+F>w!E VL_oLB"o ;M>V/By"-ysu*W_Dߝy ~q3Ǵy|v&0+D,ZFn!\R2L(eig  Lb+ !'p-U T`SXTr-j̆h%sȜɃpo'|y$'fTsC ftDbAK1h#'|D@|{ q'k9^*sw&:ytwZFE=ʼcTfc(vt:LXyS^J,[J8_V' C*fCon>ul 2AґJՈkg2~"W(ο7':ie历t]nU AɈ^!\盕\8ŒuWaBe8n?!Xw \-l30Jo׾B1OUFNpǯNHP)MAkXN/uhAS9C\Jb(C=b#!{,~ P`qy@ G_HhI$R틐_hǒCdsHpZoޠזX eGU7<(˖X?E!UD*L-Z0"T-f_ztBp@ٍ@ʙon<_GMnTxoนMz.#sOa.3zxN{Z?-.No'׋Aϕڛ:h@Y֭?O!1S/ӻtܹ/m@Ld/-ٞc. ̞(dXNj?%SflȊ)8#O^|/P?YLx9MrY-˧|_*c[P+U6ґ1P0Ԗ%n#TWA(Ъ#Zs|x&3"9!cR;J^x:%8{02aK~t-Xހ'|S<ۛ `#'501,۪-?<~`FJXL֦_#밃~Ǭ]Y5DQ* oUZ\Ex1۳ XF<09ňBnT9XJڴ3>[ƯmNvB2/TDb},U},$;(ѧLcjӚ-n=nJbhXRX=b{ l9M!")YV,ۀewhfX?J}9C4!jjS34C)ye 8 +lPj8U)g)72̔:!(.MĆD 1M}lgDjoeRqqR:  |ۦi65)zuJq]I7 RpeDQ"\ʄ' 9-,–V1@98B@z(87]E3q j1%YZԔtC>ŽVb<)v*2#zGN$"R4-T4AD+鵀'̤ T&)$89!w9U8Ԥ\M-з!!hЊgދ.wqimpll1/"XND-=փ0Ibs[oBj{QZe;+2V%`m 0uqjT)D0HMfy$TaF `+yGIT1yBũi0)X {0%KmWpǀkjd nT+ kvkUOEwizqBKk+4U5lW1u;JG4%zCBIK%w2JN1hT-b֚ͩT[LSҁa0$ͼ`Wz fT+Źv;!)3pOamV溴Z/#lw;bDy%1) f tJpg]Zb_٣ p1$k\0 sړ /7x7ǐWw̧/y?M&cQ@%חX>| Wo>ͫE4¹cF[fOϼc1CO߹6B}l\.ՓG-ӽa[\M+(-\tc@VG &wB?{Vtym>N˻ pmΨ'%ߜNy{IN:N'3YIgW3-dL]jv><#)8 I]7ԣrbu6pΚ{~k M}B${ѷw.A/[/Svk, V͛ex֔CYv$(ImO;X:xpnR!3Rޕd yU" odzw6hlkح> =y3av.v;YW>zp>1DR_[d&g|Fq.+\/"d -zmLӶӢPS,k>b.EoV2ݴ"t ڭ-1u"O9LhvBB"\jlk!lZ3Z|\JVuUbmu~P~[]>mFZf2j&RaAcz + Fax^"{zB. P{ :XP~@a[nXRKib-&D;dO9INGXYpSg`ؤ:bçЍ˜avP [~]fZLJEzər0DuXH:erXHu`JcI 9V4͇f4W3ØrpQM[6aR`CWX6]iE~ݯFdnd޾핰-$_-k{T,X0zO>,B:u0URz_ڳViэP6G=}A]P?:6/M}TADx_G\?rѾ8p<ԯ+[}Zʎ?(3ٵtYKqX{!R !bH+=kH .Bxt\,d1Ψj%]kx.eя}KFXaM>^\eO9GLKŧ/&O@}2;6³x] u(^؇ݔin U@i DHș AEwp%(+C9|rFfE!ג'm_t>YX 5ioe!ŵ,9s_Xr! +|=xt6"8mJFC ;nvkZ$: R(etF\/ԧxk. 3FQH;G pU\HzJ= u2F+?ؽQ^i-/zpFU%: եܨKpKwF*Tߌ2w͕9<Ǟ,;fFZ ړ$';^=k0,tQ'#Wa w5'rde15x&40=j{kC@{CvFl"W * v:SZ:E ԫxHyAJzGl+w ){a#>iL""_sQ1Q!Jڶ]o~̮EEXoބ u ˁpQQڿ89dllrmlL[mLXRMڎ)mx)2^lf}0.漐IngY;נk5 YAP*2X(w aX$j0b洰[Tb8?f7  гUedՎg-_7N<ߞ$&(,;o?۠FD YdHs ?b 'y 7OB)|#<%j=_)vkf:.dspCY1mZFK&8XO7J!)U3 :D ]'P##ypn\J!HPkقj-Ȯy:IoSO; )JwUC@7+-=4cnk 4<L(f!>f&? >–vpjmgʚ8_A%;P$˷RU(UM ZS9=@= K 0>O%9hEȹst< j۴3anrRv(.KYG|Z[qӫb[r|X+g kGBrm$S`}6؎@5Ⴈr)lDj<Xhs1,8;bOUwO N.8mHW. 
dJV`&֛߫r~2ޥq<ـc%h;Ds=/gFzrW x'0uCG\~$r8Q݉d0$?B3It#=k9{itp$%F‰DauA$ S4>SjcuO|o:˛-se'фb=mVDg펢Nx-4Tz)I0H-aYl4mhyő('$<-'4Xn|8(30+ndbwױ`VXIuԳwd'PjRB"Za{y-X Nw =O(cܹ`]Yb7ю?;&a_ j=v⮻9g)_n+q^icٓj/-s֐n_9C$$eA7O^Hۆlwo~#I2g'B.4!+."pg[POڶ6g6mo]~\ F#:gZ̃E``(Fb>Jp%O%!p*`\4Acf GJay}ʵьf%r&+4FMz9Z` HecXOx_)l,㘶\:/}W97l+?T.O^\3gʱzMefL +U3w\*RdžݫC#S\u綠銊^wS\ݙ'ב,`J}G;T5 ) œ#6֑kث~xVMqλw# c??h@ř-c>Ty_g.[۲YCQ,mNILGf]c`~k'o=mf%*Juv=g ؉@vz><;溧Wa=82񁐮f'8"ךWR=PUt!e'8`n8abYA 6bT"fND.5h\T.w)~9Fbp,YE鑿_j<\WC|EY`ǫ3$ѢvO?b:/W-.G,̵Ϳ]1?3Yb-<|J*Tn^sBo(RbJ}Kp|4@&<r ׄf|Ha|_g҅H_ӎi% s9կ JZ2t8hM8]J8Pd t0tֲ \$D1 6pa# &HPU\s >nBJHңj\dʨ=jdG]ݦI1~ُv/0QmejP$uO !p-)'IǕ~iMWӹu]{t Ȋ@}nBw4DX5Nv:aP]w }-zMW,1`aC{.ml60s V0ɧ|H*Gjv**c+GPl1c̲kήCT*>`%)J6;1 k|lHppG._J$A'q]X yaKB rBnIS^y>5/|Rٴqtv$s)*"~լR>9Qe p ! 9铕I"< @F?pynʙo@7_]lv~_UwZ*=%1&O'߫ܠ=Í'fLⴻp:v$$VO?4D-?Iƕ 3c7mdC~Tb+HL'iBRM0u:̛ǢfGKpVl}:샭gm`L %681-jupֽ ,i=Ք808Ơx#XӺ\Ҕow]okdϱ(N \0ib,[xXXYryv9X@n v=h`1*l4QoBXDi,Jfj~f-3/-a_uVCEW':WJ_7ќ"{!FbgT' BxH's cGt}H'9KzDO5:l"Ŗ?e$I4/Y-W" 8.21AFh}yʨ Z:Oώ-o:P򦃯I͔f=#ĖxH)V9c^ሢbuK5/J ڎ'Gy êea\S4$1Цo2a,C8iƞfg=ANq -% ~VsXTDCA <\wu;*]!U:BD^D!ǼׂJ]Pi@~wYf ܂J 2Z)T )MujzU:0i ް/c3!)[fI$ 8QDH݋]#XsE_iȝZ JqϮ䴢ޕOI9:Vl;2J$ܒmJ1fە}v?y59ƚ3@dm&q\gD $DߧDŽKNA^~V;'[%h )Jmk̵4&LSoŷ \?4l'>ע;Iݔ yP^ n~@&[)#@2ֆ]`nFĆ s'GT7 fÞ ̙QeM+++FZ2dۙ.Bz/ -` Lԙ`۝ mWRc i;x&phfy88tpԂᔗBuA*T## -c rZ|da AVlp5k<8@n4]}@& ?f{O/=y?\<22utǢak2OJ,̼ڕüQH՗1 ,Z-_OTL\jL=&q%m7<ۿM|#RQ$e5:jc{NOQA':~:8E)ŁΔNhC`])$>_z[kA즖=JtUdt0q!a=-#J"ǂ!N1 ""a79@a(+˾~*ۮ4#9 JWz`瞤[QR? 
B1R5˜ XJm:BAV\wԼE2)&ޛuDOz\I5nuw[V]GnHy.wnVtW}2 xanrq)Yiej{#_K f* O 8|af*dڎgꙑfwHÈVwbHV=漛zoW d%5?&^X,ƝL[4{tŘQ~XYpL/|M_53?RXYOP[\ d!DslJ38n˻Uruc:HnZDgl=X6rؔ 6pmc1oؓ~ AsC)0 lgꨩrW7,fwdE7~L8MvyZuPm(P"j,76 LƗz>vJT=C܊z:-pH>]F4)xߜp.>_-gKfT[nMV\*핏g2P-gw_kf~ސ#aƌ*{#&3܁bz^ ]CTK7u?jA6-SoqЯO]ͯ&[L.e54~ g4V,JֳKq(!]f,lȨ ^7x)x>׆\NpktNܐ?Wդcv(ӿ,(| aHqs~_ #8'dBo8Oǁ OfR#xu -nrh`!:E+hƁamCB6I xjX}Ŗ #W1l H!81mgMEz a!Dslpv+w ]G tr}Lt-{m y&cS G[ ۉt$cAU7@3[@Wnzrs\\ `欄YgI6XVJ٪30oRآPf\)Jr(JK%V Dm+TUQYN#I* Ifey:k:Qzz`4[s֗>0_mc}1oI^zN,Џ2_qB,g*;# fܣt%hYz@TaAI84 kH# >| ͚(m=yάy 2[IZ3EdkBP,u2#eB0Bx.+)WkB) @G9*/2٪jjkP9DlOR˹قזe?׌=?8*Bs)*X[K)Y0BD1Ǩ ?g?7>[,MB1KcͿ߹*CܬU l+Ԅ}A5bi?~үRōJʰǵ^>xY9]mUxIkU SV;|<йu!ȩL= 4UE ٺ@mb+T; GGR dkWΗj)_gx0|Rm,]v1h$CX?ƏڏIF dM?T(rKE 1dq,8|: w槕{nus71a/¢: }=H ]Ne`O0W8̽jla|M2yMYݣ$ t.gosHAQEZ*6Z9H $C Nx fhHArV8ĉ:  ys&}M#+1aos }5i0}:u෋pnr(拫K/g;My4|L}0m\]h F8D.hQaeT)J+^2Rpt8##Sg~~vrbׇx:?[gNnq0?l]73x:"%υBJ]ދq:=+.RIKÍB0%'EY0Q[SU%IY*(g數X{'(=˗st( :&̜[j6GsAudڠ/z#ɲ*yHV~T?>AT190*,'irp HfQd~㈣2;׹Iq 0/!iC{3phJjxyϞNXT_>sC$z>O< X֌P0~- 4`duXZnKr(R+GGIf{RV7)>'YzNfO ѻns >dz'` [3Y >SP:lwKR 2@-J 8HDXCuHǫ}LڞwI=7~^ǨW4㑘[/#?#<8+1|:n[&z8|%@tIBm$DsG{a}&qqyA1Fq1auyߋɱ|8{{w&1aFW)PM[4.4-)O?dL {_ aLMo1i?r$J4+UTv,vrifbܐ]^_bs蒦厮V*Z1fXEʗZbddEf8v90e љ>Bj5Ȯ< |}&dƈ<:+߈ү߄ben W}M@b)C0 7_nk$\/.Cfo.4zG_vfחR9B]|Z,yIQu/?zJ&.nU?>ۊ 6\ʎZ+᪜fq29QQ 4lw=ܺ*[}0g+QF]#|S(4w\UHOT\(8kBFO6@>rz0GQ8eH@aTp9 tC`p605pb77 @\xq- &~DJjT#<6#L;#9 .X)B! 
jȚʒE%-?X[/TXRj%GTFy%,㲐,\HYSeQj4j3^j ~) R$4hGQzۍ#5(Dz*UBHQ\@Ad$QzwHb] d`%zSW o[Ϡ蟯?jש`idD4د}r%D WBXSb)Ds`deuTZZh[ii$Ha8SX&'|f"i S3g7鞫4etЊ~!wg3i#Kv7$x8vofHfxG&Ds)No# (F+=jg:ogH='>FeP zOTVV1nX,ƭTŨ,n>X?]z&]zzrw1Po(dH+KR+')GVR0?LG@#% ILӡáiu(DlKju&M:>c-$u6!yj<~ JMR^ꫛBۅ&M!,UnZ9 M޽aQKY~TTWnajUh:lfoλg{v|cؔW7n/ًz_ ox8ss&rW_|p3,䕛hM]Ǻly7z\ĘN7R!8M)#M4˦AĹ$vn:1oxC"$rMn"McS= HyH`Gveչ{}n򋏮C]ba=<!?A!NTJ{Re.$od'EI~eI&0ܾrEXn>ߞ~lDHL/۫o Ye\ C"_o ;W},hK-@=&EWVZ $-C g2M!\j9QƋKi䴒Q urVR@zl"  #ܐG^ 6̰oՂ4A /blsg_ 㢴5ԥ,S suRS͍:u'lQֶB "QFjQ{*>[Шg-#@o 9/mi䖠^`P?U2ddڔ;Z`+<̱mEplaJ9#\RP[- @%+-2 PF81)mἼ#s1Eb2VI7'0_"*X']pw^}DrUs[TBXqV f]"sF"Kn0JHF^9aׯH֦K{3$eG+D歷ek%1>c{.JѨwK7po6>s΢AO';73+&daМ &4jZ8bq(/W䊍Jjrc/*{Qit)%zzXcbĵ6T)*G(d\K8;FS_]xJj]DVhjF!@Jw)iJ׫{Z==)c^4DQq8"PH) ?R*l0h4w啣W7`ITT3 vU}=İ*_&z2e#![=IbH=3d H́Tl_RFnػ6rcWXzIи4nqnT^v˅-N`HIë0̐ז}C4\FPK6ɛ Um!T m T8$6PdBD;mޫ:(U[WN1x^I%Bygޭ|&ɦ87R'U9u6혻@dح{{w+a!߹̴ٔ)fhVyЬrz؊f2 vc dJM}['("[?){qt.UNG]ɣ lo5֑ڛt0<.YiySrO>.\n|~ LK)|GM.d47^{;eևYrVP򪃐9k>wSrS49WrUdTh\pK=TAUY^8c|#_okKb|f.Pͪ.򊘦vcR4ѪZN) G ,U!sЎhĵ a =R_"?I D)й{s&/v% p 켊z gULHpyS'9x^Ay{ ( F78jce44,l;Jv"p dA4U:CEDPtY> iŎ0-~NI31ܥu>/`jRPŋ n4=ճ]k.LâТyn }7Vvv[qom jW2L.gB! z]7&=K]v;ƕBL~O銄gag);)UʏYv!Ylg ԐaLvADp! #7xLE*0Ucs'ZVPuJ([R.Z={o1M*: ,o._< k ) 'Kq8f~[ 1P`*TSSaAƍvqɧ*s۪ʾ;4O#kYن@ W15Qxi8zJG"K"Dxc^yA*.$*8z!ĘyJusn00\NVg28i밹lt)a_LHȯiܶco=$ƱcXk.pM+ѽD 镉lri×f&~z6oz6o{{94SnRr/%s1 RX13Ă(Qruv/^^Iu*_ׁ^Y t5G:?,z<&%˽Ϝs֓apI`oY`a/5 k<=B2ﬤJ =A)\Iқȇ>=And\_\i_[Yw/ zЋ5w z{C]\zFODz^lW[ mѲH̃FWChA Gt"$!@ .ٍ`H0V7Z V6 C /1Ngvԙcc"'_Y׫|+\#[kCRCGB+e!=w[-sJ<_$Ź>IDSPtzm>+$. 
;n9BC^BDENn*brz+s]IVM傌R|RJؓIcA'KH4_8B[Xq5"E X>²u /2};⍼{ "6Z /;qaZ d(~]+V@1Qd1s!URPJ#*k1\qΌ&bd~γ={2 l/=[ꂋCOSU:$[n73OD01V Ovz'Fq@݋ 3CcxvuPJNcR`m1vE %v7E8~LRL"ݰ$kZ@qPh2Q C@~~;zPE,::MbI׋:n?M_'[={c es͒ﺥh'BPh{^/GO u_p:I%C*xGTHsܚ@D^qD.0(u4K@uH9UTYc${qL>J) m APoM4E.PC WX"<%5ͣѱD)qsko֩Ls[84^ .3BIw$yT΋l>M F>gOdz$3aǗ4r)yI%v@+xT٤Qϵ[>h[!M uTy Έǘ ddÑxڳL:Tq^_#3~ADKqՕj( $if-YRJK6JKmR`#*gu(~X PKJ̔TJ_#J (\*nFBhu~[zvXЧw0=,-/e 3={P"S+qyh<-|^)A6Kzů>{=bWqQFuI&'=AN~N_]^D[~fwױ}"pm({՟,+ ;]i{ кL1"La֌ZMzvdj*aq2E2{ k2c!YoCBj*9 |y~Aius&wAwNO2 * jnDE (Y6k2(H v ݩfU;f;mIL9mm. {`7-4Q!T!#:py bns=>'_}SQqE|~߅(z:VGscOB3i Po& B'It!IDd901qC4̺(8q$AQ܋TCAc3 0zaMVG!Uk ~dΜ"sdD 3 6"l4 pZW1b34u+ 3X 3Vi-e*Jc+1CKIK3ϝaNaR1-[$2[I@Dl?9=`>夕f!!r2آh Y A 7S~sz{.Qyc8aSszj5H!l%{@j㙚MfY1)3ٛYb?)K6.$JK:8/%Et媠 :o"]gݙYCTljxW^jJ޵P0EH6 6l~/Y\(lF8%)M=1NG+ꔴc58zR1ǝ3O%kRNOr?@U{2i{ KsM8b,ŷw( 6,lF +)9BhU0V#Ft8%oub}'1TU0 ͻ\ /[ qh K_hH"2UV-}ܴEԥ郞f[,NW 8T]3Z'؂ NA,zs@:hhjR|Z=2sbnlxWj٤JC~]K&r}wy?u7Q>Ĝ^yLt4q}=4_nn/?yq<\D;^iO/(C,eQ#D@ǫ>| *Nb$q尰KKŶBK1%TA7r( [R3NmϫB;%k('oYD T|쏇)Ig۫ؓ,*iܤ]VYVj JW 9F(il1rs%C 9,gpyJ$Kw_P6ه[?{__BSV/G?D"mz_qO˟LfH a"DUgDbFXP" jN%i!J"0NPxc= 1,ƓޙV_~T1mc\7kSGufXy?na oyM9PP +  rl%h`LXkU ydQH^qiW57Pq"cw N>Yi3uBe\r, T;;mX-r1II(:OI˜-I*K.[~;]wk0 Iy>NqR >z&JCQTRQ_g%bQF;ޱC#ncH9n&s0F:}opZeC Dz}sJZia8)qUi]"S-Rѯ sic8$L58"1$ȀGF)SՅj'=jb]z}s?:$σMNO0e&G;E&h A$[}U SMRJ-!Trb!b|rFxk眂3V0MAQ,cͲ6 A5sT1\rV$YIp r[(we-&E%*ڍ ݍUCga\E^8joK ,Dg}jgqu5πO$=d6\1XvfnU%Q0*+YT++C~՘u IUkJAOd1CQcԮO;XJѢ<D2D?pEkzY* 7TvHE!@ y 8ѦQ%J`_ p*0S<^%zl0QJ=1٥BjC`Id^I[p(_xbqf'~_4;Ah$ \1N~EŇhV)MX\W׾B6@Rbgtm1z5h8Pd|4 hؾ'r=K޷v% ȚP&g_ z[iPD<@ipVIA>"|O5Eͺj}|__җAVBoLb5D!E߲bbu.ԜI $ؿQf|1ɱ,n ]`I]u%L:78_UOvc[T4d.a$PUW2BYN|֎lƖ;xx{.FlDd6eݴیC }+ShZ|{eʮU8Ѳn+n=_6l- 1:OM;1deIڈmEC *\JI~Xz|PzđΤȿ=!tIf_H9qQ*RLwƹ誁#[GpK]}~Ԍgg ,ϼ/3﷮"@񓫊v$08Mm3yp`Ʊ Of=كZ{;g~_w}V<u쾩q70WӴnU-v'&ɾMG^4* 5vV vQBl3].mi|k?pJlp>m{Ӈ[#Xe8BEb9Wo#}FZFAtC.N>%:i-Βq. 
hW:^y6@:7^}Fas-CcwUCrՠUk3pnLc<]TI5UIPJ9_3bi[wzmRtmKrX/p\W^A"b\:8ÿc=0HYc&oS׌ǞbqEN0>$uƠQ +ع,똘}Jں}{P3Q̶C*=gk1OԖdq7ܡ)+$xr }$2Bd{6Z1 P<)hKȁ1' n ε frA'0 8C\(!:X!gNդel>ޅ=Hɽ  tG:LXc `PqO?څ9_*S:,9! p9]崯$ VoEMZځ>xeцD™<[ TA$'r8W$.N"V, .Q $ck.XjS"&ƾ?-@-Rhu fS1.@NI$9 "rRYrYR V؈qn%ƃF7 !+3@&*k|5Ei#- Tp7J;۽S#.Mwkm6UQl .Vz*c;߭JlUp[mí*ҭ6F]ɟ4K?svӾ%?L[׺ng~Or?v[9wO9b#FRKh/4`Q[^u@f'KD8qϞ~w>}k>#~='/K&Fwc>z]8~>5X{HleB[]HZ==zF?T'[UV^OḰx7?[0H{<̋72<̱ߋ"M>|$cr.fSc{ث~nA˿|Z{(w(H^ FD%Z!qqcl= up%hUp;ykTK8a1p>#˫ђLZm^|X.vi3u籨s["H9}iWsk҇gs]‹[7-!4?׍aUvc.K=j24YޛBuXiX- #z6A*rrEXɒC6%՚ x69~-U0("{URcU1T|6%e-u/ú{CΟj9 ,7$o9."vXkm)n%U?y?nR ̫$w:~U aiBoqEލ3 {@4i{Ei8mƅ._ei)x/>0NT`HcdȬ3pРxҽ~u⡖s̞t-OCܱ1qu qn5tzv ĺOk|kVb=u=q\㙨WK=bL5˺chO֝A;%u؉ ~ubYð>TrXF>;?L͍̎~ЧXnS^b gooNOe?} {/apᤍ= fٖkL֛'ö3ނȞ{`/Ӈ<.m'.O-s]x?Nι"yߣR򠾸o{s;!C,X5XiSV.I³7rŰvd|wܧ&K>Si_벸/S=^e 75xPLDΊN/?K2-?{iiG2{D`wN4ac5Bzzl@5]$;+4 :,_=*EVq|*5Z!t1#ᢪ)r̒cV=wU תr\b }!RT*H`љZh%igǠO@Akgg(D&}`Q@mg!6~P !CQ~e1G`Q8F2q!8pp]ozB*1_7 >q-!ʁg_yʜ+5g?p< 4ABj{y\23af=ޅkYZq6z/b-O޳-a7sG &'*=G3DV6?h|1kk]18l }e%*wsiP^=3{?ݓ6֜<143?@6l ȆMneXgQ b/K3 N"ܓug0g5]v]|0٩=}Yu.nkFhөBtO!,a, =+ZB럩|a ${o,,L2*\V@̺U:}tK3vJN킁cXkk*~MϷ |m z2=Ed5m0ww zb΅eBЙ{rP^<X}ɈK;q#(c/~}Ii>o\ cu>JFA:Pq1-%D;oy:[@<481vodߦW0?vh3/0 oFnY&;!_w`[n~נx}VY>\r檮Yo%=폁;5eRz֮tw*f6mtKڿJh$[Z_x&ޡy}O5s jۿ8mymW6ɼ@(#g%A D9%yOAɵw#^A{#3HtLK;֪X,rP6z݊VM ZxmtBn_So{ 09pt>dw]i'l<Б+YNvFr)Hw?+([C'vEd9!۪[WYW2Y/Je48`y,ʣT*B+A]VqpYd%YRج2uGrCR'df_qyB Z6̳:Vv%KS.5S7$89Џc`jƦkp8trCZœ$Po{({O(*&{SPZj嚽p ЧdyI1``#"sM."Wc%+Uv8v Z2@խmzt9\h5AVD}Xm=>\|`;3{}nN9l^nKQc4J%6F߂4_?;)s YGz>K=SQ &obp67S%I7&O|Ovn>My lcl<$oŬ =`ҫu^l(^dtI;~tjpt2$]#0ɕ3[InқVXF3C R-?k/H]ߊ?W"ᅣ_vJ#^L0.2`c117S.K.J6 s 1ˬE$;?*#~@U)#9͂r֨|MO u/;r.1r IS ݁oH>%Qs D\#Zn ؜!s{ oWQvlwoXTJ@!٘e\M{mb@ёѱ!X[Wϭި+A,Sry5TJUÑ='|>tA y-Y5 RM@aC[^PV04]dqV 7 ,Wֳœ599CXVkfD.tMNbuۧzԹƻ^U69%c&'ESTQ(d@={.zpJbgȥIvm/8抷[vZϊYU ;D4\N!T>q.rou1kK/{CGNaG$C;1I{س~ -uk xt%WT D,1J*1zgi0.߮~LBьjJ%=G{]t؆]9DB?`A٫yÁ-y!vwU!^{X6@E%慚P(ṍB3!=Uva=䨆s?_s 4!\ƥ$t׎2T둏zB106pQBb ՚W-}[M33K31qg4FEUyF)+_%Vp9cP]bBZF R Vh?[tpk$r6'=ގ^"rf5YJ:YORNt ;ۧꎷw 
UV/ܒ1SBF?={!SWƔ!SC4;$"פ#FٜNssiӓT I`|* h^m%vqF\nh+ґURڊ`[7J]s`&%w.$ՑJCRV}1ԏ3_Ygf>%IqtTz6B~75jv9t\=&; 3_9w0?}H2,9ԥhɘoμ-"/*T7J,Ql; ́"3:=tmwܧM{ʨ{ Ü/&4hvq0RGe3(gx8MMmǩTQ/S־ c MpC:>e&:5Oj5N+kz5n:~mu Ik-}\0*u| AqE;OQzUѨn?nd7҃MُrM:iVWΪnq_wى]at{|㥩N;OޮLt- XL{@aNxJh\S.Ev^ˤg( ba᎜v~,+k̖ 5R( B)f݋{=+V:Z-4+mflmm8 iڿKJ=##AB e bPywY%!]$!X}f24i 6d,>4iORISw~FE;7t!c,RnV JIͽ S T;Mr!sdM[>ҢK{*{l:NQ}O5txޝ2q94reӋ]T:q,E7:H^yPZ.n\vLlJQytD5FŮV^x=)] @JG )EMё=CFI({!х*g-5΃h %?}ِJم 9*YT,t1Ⱄs v\֊Uтh$82NM@5*GRB"F'>*!Ve; *:U{2T8*F6{0{aض|%92S(zp*#%o#uK8gŤS&(*aT#s%-v0` G%v,)JN+@5c=_)>Ro͞{kȦUeEC"9Ő bs@*BK Co%IGYⱣ{bCbF6 _ev>Օ7jC4ҩ:Ω'tD0nx\evIp4pRT)uWs/>C9I듵8Cv|"=jrxkXZ;s݆bz-3hh9Ή˂[U7m^jq:m.AYBQ ֬G?G质ފAbq΀v/;p G]8Nn s=W^;hELkvx+Fw30u'3I/n|uHsm,v3"8ƝQe*h:?$vF+mM$mΌ6mNq]ot@gJ/r-icG%*!.E6N^e݁oaJw*)P瀵 _ܳVekˡI\7_{@4:Y-&'+ VV}`^ef^"FfTܳ ZXQDW7Jf&5)vk"1#3h@VA&Tk1cJy/VxǕ/N~AzQEJ84лl16f1hP/!0?HX$ֳ]> J~9H>j}>I 1x0Q4YT YZTVmg+p</ XSq~rb.o'& 9qQ}O5hPq\YmnDh5ǮO'"rD[4oTU|;<72VyKL ڛ2̢)-BO,krZ\:V 7e 蠟/Ew0_e Sjj]A EMuVq];S@5@+_F[5.+waMk47%Ʊ3'òtYw>l?N/rj`YǦ(WԸb|ٮލp1BO;Bõ[2f5hgaO|P=Ftj#Ԣ'1wzZ#t:].jѮ%4 k~ QN~imĕR 0ȨNI5ܙ&4Iѳ9K%_2do3Ι}}ˇ%{`֪UXcT_SMe'X鳶RL}k0 6lQf@xBUZrH.X1Q(W^7ZHݖA^ވ=4J"<]nK~ :. 뀴 ~{ue/-IJE>\ۿyQc˭c5Ӈ?OOןm=dwߏ dゎ$B.iRƋx_>5ox}ʕwl7LVNu-Բ|c*1g\__j-nwxzm襜|"W80̽j^YrUxkFE۞Ey)E~d]ݷ=H6i+eG俟jE"6#[X k[Wd]ȪZz\'0[j0\<O廞"|ULM.;hP@amj|㾴ylW/O>HQʈ" , x.|) _ZrJfjXM`pMk|.} N[q&H2<|Q? 
#:d/@ 3C s)X˽Kh5Ө=2pfV'mb)P V`&`>X$,P`Nx/KZQ`Q "%Q% DZF ^%dh9F>phSzG58 ȽYtJkt-kO2V ) P[fDIŨgI*.Q)r$ei6ܺO\0.Vֲ|F M v!Y(m~ZP%4$mh(mFy2e *h>ύ{_JKɶ}(z8M1~eCb+_t.$^3k=X [aN "-z@@3'ڂ BaA3[C)fWY:U}\pQrc4n.X1 %GF-hf],5MULX"3zQ"9=i_ ݩ* E9+5/_vv*2QΆ ֔Cln޳HVAM7o\f3]xYB܂I,K1?;Jx7@qL9[D汏"9lBnoޜ^rOPޟ@don0> cP|$3l4dhX TMA#72CGT\Hng՝%XIז ׼𬁳* a|M)x#.bٹZO$}69Zwu5fy۪ew!\_4^z5xSz=W|# K^Oe^|ڙ*N!l&%|:|~ڀ~:/37h&K6@4 6FѷZD>yKpV 2ҭ)Sj\5 i]\ "9vK-Q&gZ_j4؉'%uW )V7Uf5ZŹlxma &-I8xy,9A=|ӏ;~ǎ52R*;ArR|~zv-c r$@~> tNBo9}lBb'dr.F0 zP!fVx=-@jٞRkɠZQ'Lq/w I6&HIҤ E!3DZ r‰غ"7;S5r*FMsFRZ{"o (8 a*M`.y^#i YsV-G +ed%:)l<2dHA Eby K nK˱8xeͺC8%Oz AR+Pc?J -H>P 3aJ@]%'RIhFq2*cq!:.,8ڟ9M.0C3xV"G6>Y㳞 mt˩y9zD[ ALH^{4r8aT%ӇiO!mw|`$>?؋P31؋nzF1e;3Y+;Fv/_w,;~:,9ʞdgEHKŅmLr*X;7?㪺a}p)⪒1O1lX-8%u1 ('oIpI**+>G-Cz-*2>@"2 kkmDH(2ҭcAk^j >ϩo1y p@iN;. v>#De.>BTӨ0ybkOojLvŵ(ViR#y.j&,ry#: f j{|(i$j6јuv' |6CBsY9NYG-iut M.PjZDLmu-y޲,i-T_)3+99x9+I=9ZHN(ρ1 9Vd]tLBeJyA[L)I"Zw`0څ6Tqx$<Ҕ V{gRԡMzXoNAʐ绦B+uOvX.wwz#rL/8oŷ\u'_GxIߜ &tdms_9F#߮;9 c? \sf׍:>BYw(>XP MsA}؀+l(>V {gTHO V [y*$R|aȡNgv~gcцD+cAh&L;FE̶o%Q28a%?FTac SB9);YC_a*lG|X;sgo)gY+_p#\_4^z58\8߈- wє`\dHߜMd v5Nfy4 Ql=|(ӣ\o]--57tg~1F(p;iϧI.O>tsWut͕X1Y )AOv< dwxy  ( Ca|Ɖ{fUŕݏt$/Z}pYpX-*: Mr㩮oV`W @@Dtc}fGlsg*x}cȄ!No;3xeJ>` 2avm`Cq CYD7˓ NVuMn|u\^ V|8C {+9<ZϞT۸r}6 ʧ9չT=uӂ)XLߨnNjN f݆78Z1fW>E)GȻtLh݆bP2u~ƺ1!voغ o'/nc̐|v-;?‚Wuk+ۺ|BG գlhB* m?j$1|B#є1 O/ )B Z^!0YpB2O)>)zxBJmQ+o OϮbRVQK >ⱻKް.~3>“T0^8UIYCIAKt;.BkQ(n5<+w5>EQcs%#l`ԅ롐B6LXnWGgzq%{}e},&鋏GEi'd}^7?>Lϻsv7WW@Րxѯ 7Ȏs?~bcRNcп&5&|qM $'ݧos0ս6;n4sX0nvo?QB*i-Ʋ+{m]C Gxeɂu*&3*X`'t {.%V{+A3@H 2%>k8wlSwzYl<η9.G`f4컔DαR,  Q:ϭ|#as3kb\5!N7y7joɍ@0T=fusq3?!\$8sJM}g>SN';;u^)quyR%e[y*x_6[SAC z>.qx*F8h. % D> (\֌qrvR$4#~nk?K@7oUnn+|LBAzggs|d(P^1`(=x iqG t;lV ,R]ay?AuFPQ(CTJzf^CTF*AhC UF[$:h6.J>ZKz 8qד #i.d 0~Rq' -IJiDzjBxԦNFY08\3nH$6LIҡbPʙ:h̨ M4oxe9$5qK\uR^Ck+/8hdI(zruA6Aq/S2:#P>V>yRdOCc"'4D/ PCyLHPYy\Dml '`+_2B0/΍$p8:-znB$B_8/UG޸9LIc0`6ӌ PCy:W|ER lnH2!œ/dhp_cv V:ڸpO7. 
B+n nG 4Yz^(tmu2@QW=yKFئ!mpsUO#aǚD,2zSd8w|f(vl9/ ~3h&؍Cu߮>TN-Yg|uRflK;|uhm.gDZ}A_LumYMD[~]q#)ԨFR3TjԅTcKu1zOpCK'+c@*kοg\ЂpS C"\ro~(μ k|!P!Z}bf*+Hж( *SLA 'J˭*b 4_)Tۼe۵>fCg]؛]{ 8:E+؟uٞr.̾WKo[? WK}^%qǷGn2ի? CinǷ^։0»u`ʈ61 ڜշZ h#۫j&{4S{BsR sb' hUzlmIc:wǻz3-gsPtF>$9>]hYU=Jz!%{Ɲ*圶~jv=4\óthy46̉uSw&{8L@UP hϹ|MzƀPry&bv `sFy9Wkmݺi+dcqACFX2;@Trƪ -`$̊ \+JYe'_QkɈ'‡t^< Hu vjf_Rg:/]£dn`vSf9Dcr^< 3λ#(a Fžrk`' zy 22#89eUzJ4Dsn s컶謰$הdkEc26Fx)!,ʐ\4ځ|P?_(nD -wj˒8W Ap0r"HP{h7I8x`LeY!,\v3R_QijR̼#GjA6х$pgƟPEf!Ai|k\@$stpt{'>8y2*yäh|&nhY:e<ň^Fo9!wɴ&($bhտb͕^dlq<®7un} IViZ[rvWpWqkn|qK"j2Ԅ*\c (fcÿ&-:(y ҂iEnKpZHQqb@[{mIUsgPs*C@y:sꄌf{pDem˟sCW$1|{H\~r9B~!\Z!E/c*a:>StzOhr8,t$]H'I|;)zGTQ%V^am1! r'$ );eʳLϔP^thnШ5RKN#O.WE!;PC9\q"JPCAz`,2f&M"Pi034rn&h~s1׽K EEIr55;/:j'78K3g6F\S˄L"טwKx:P|2/B|]/~ C"(jtd銚wR^P5kL1mP(v  n3#F@ސ (U:Mk{;ilk|ӆ!duUZ5{=0TWg^yee#C?ZtOu"g-ֽ+"Oon9zOHLn_xDÂh 1lC;\ț-ϸ?xɚmb 1}ZpF <:uo`Ѱ&Y 7;gWYRVa V{&]+*i#3 |K@Ґ 1)T1ѹO HrAOlFϪp/O6|+z~wNJ\3J6a)R2OpJ 3ú` r<ݣ^;w>J2/GKZ=?w{m'E|^x̤r"sVF|EPId$wqD-lސ#@ p։9QMaD ./TγUQ"TFlEa9",5?|uRUvj]e%b{ɹթfx1j>aE 5uD+$}"5_b9m+va?sG?wq!Z\3 T,a՝x6 K jS-qT]d;pbyѲs[Kgȶ=&hfߊu 曩|ɽGm1|w|?l  R3!wvVŏ7{b{LKKφw m; x9lznWO[6Bwj}AVW!K3b]z!UYڭ~Y`vkb%ZcM)Xh7%J%ܗn5etj>#J]0;VAvk`%Z}MI_۳ u~pL@gE {Ol:2!4N8RNȭ D$H)(=Ru6"QJKg+AX4Z\X~Ô4N5=Nr3d,0YUsvV;8`p2er*Yic1k?2eN8[j8bGe&TPj'fU@)87 smjD !bp@T+-ɍ#rbuJ f VI=~G. 
3L' e"!Ԉ%[?(MwVz{  DT۶EEǗ}x{y= `?ﴎ4)o/K_(}&K jyZOv(QBDZ%_ݏSJX[꫾WNüݩ{šY}[2DT!AF4֎9-vV00 '`3}/ut 0僾hpEsE5N'"ts4nX_7B V?ؖ3@[|nUNZ֖SrN]VChmlMJDj 2eʥ䴤f9S\)bhBbSR)N%A8ZpVZiB |Qo`FA6lYTk3;[=L(y',R9]|73y|c=F_g :krЫg; ocgQLzptm;Xh%',5bhF:&Tu\ Yx IH"oos% S}B\_]p!^tZ(;|pM{ɾBPT9{Pa>h dl] (cnKo`Dڦu @[F'y4+̒|2k)iD bjfΟ?\EttG}OƧuȦTG=8K;7:qoݭ员jN]qXNm]d2 1y#CΏ}*LWf*,IMybrd},͒3xH!RL%.U` ^z"mb zGteD2ì7 iANJ JP|x W E]an &p#p׽ $j xap/73 AHA-Ʊ.iN>id6]=Ҭ^~7x( Ex-9Xl_L<~ "DR+?pO_74SjHL5bۃJ NHFYke+xNOI$9wH^SIP%f=Q%o%1=yS囌#u5P4qF3hwVЋz鬮;;+RJ"w1nNpp7ջa~ŮEšݜd4) QKUusƺ9E|yg0-5ƶĠKOVhrͶrIjt\Nq`5̥SKU0Ճf.M؜J.;fE-Wަ\q-̦6wӯdO0)K|3Ӊ5}1 {qzssp0$.Yv%05;F)GuqQ;I5^q$Qqyg$bt(zIye;}`GX5v/VO`Nqdt$ KJ8cdsw*V]CO&^b3H*yv/r&O==7 2^W1|hsFz!đ\#YZ/_j}}˞44ZIxGEkj櫵W{Ֆa t.Z+ } _0\XPZ.nx] )4*y==*F>{yZKk0qC[ljdP$RsIj81HH[n(GN4Xc˖1sTf.g"8:F) (x2,A.n2IJjV"Hu}hJ49]O ""ptW] Pǥ8J`Xh;#KbLs2)IB=viD E*9➾ݒ=uO)$w H0iiPˣXzX>&gNVKE,2ܥh" *a3K}ZS:TuYjC`Q4(\R 28Mi0M1247*M-yUΏT\n@*\ ܇T{)MB{\wge/]WWu۫waF֮0к- ] ~j:?>S2;̍ )SKav}| QA:*+Cz(U1AufRR,w[UQ 6ʮksW9=ד U]Oi2_Xr~Vau֑ے*ڃ_檭_˜R<<tu횶328Ioٚ WЫ0f'Qtk L##H],7_O*%H4a(n ayCK!N&xv+x`eqf!7%;fcH <¶oa-U]wP_6 H*ՓzO[\u.ļqC$ʃ:QZјQѻ s)[.nQ43=s gIh뀙1s؈hpj%sABrH-Ԋ4g*2-3R9s)KeD"Qt8cX`N*8Fb'$#N82չ2BKl`N3q-K |h JE(jr>S, gJ=;-kPuwB !5ˠ4Pk܎,/Вe+~OQ F*Y-B'oOޡ aaY4cnw`%m-Ib>q6ͪ}7LpݑԥB7/~_ǏzXhJHDr>_7A,Pz.V!TuAEJ.ןW3-% AJ$1l"0art^r`㎽NTERzJ^1OT†#D|v^ \)Lۋ!#`M4Xj_ƄeܦKD۫@RwuLXLX@9XFW"Gg q7ԏ˽O2cri͙&9Ṩd5GIG\NJħ=-`P%sƈޑ+Hk"UZ9 L?w~WI"0}$eJb|qy=yS;QX}GG r7ZB*گݙX"(OO{Qq-Gs[vUâ gb"08ހ.r´oMe!^Л!H5>f8o͖FD!0ʨ ,1QQD/^Ӹ%D^Qd^Z+ޝl)ΘWűbm$/Y<9 `Y"X,F:ytiDW"Vy?uf|rqW|*߱ɜ'(,aD\%G:efjP77O'x豗8ݳ d^YxH8j2 |Q_aW<َNB )?}`;ja27|tz/ >L@–2() 7L7f}.znϷQY N$s0p ɠ!h 5)1TZb22NS0Fr#ٿRevS5$d沇|COmn;dU*Id&3%Vvf0^; ig4;Ծ/> @\z_"p:3K٣_s|[Fn,]ܜXf3Jq7/BcquVF'q ]W-Wۭ1np8̃\| _3qc͏n;-?ߋnفk"ŝ7G~:}D/!qn77 {I65gm\ ABBh?'Q<.Jݴeӂխ-5`[O4 *Dn/ V B S%aMJ,E*I풛YDKV_aH7tcA8MxvY +TtKhM:04FV+sdұs0[YܚO PKϛJ};,^Nؚ$ra5_"8g6#QO ȬEN]m@`KY)'#; p|XN[) ]9{F\6]3(b_0'dκ9欔֎lw5-IҊDaҴѭ8m/,/aJnḪGjgTd#hQk D8>o*EӖ55rkTjm 'Dx85C3NgxYAhJO*(V}a2Y̒Ugnp#b&N2SNf 7tಯ9#~@3/ދ 
"}>K@#/(&83a~Y F91'{YO q`6RJP:w*&1*6]`7`XuR"g7 fM!#oYBQC4+l|jO҅PtkiBp+J% T fۏޖ_+lD BSe0BhõJn2&,kжWE,> BA$%}S gTp#x-ՆBW(% ^L6 4ls^"knfeS/e0>1n$EMn.u%ئpҒAL-4n^Wx!g50II&/Y}շ̀,M.xiDG=/J_oN%J/:JRwQd?gdUj`JEC$5A8_xF&mȎy$es$嗬[ d|$e\WDVqrЄѓc;>KRr<"' B )7dcLrd83 1o0xS 3{M}.71;݁WV)Ԥ:|pg>–Ob]lםخQ$12x6 "V2by 0H1'`&!4(tsB3NDv̎^3*pfTNtcUFzyN>mkvϹO9l2DzhH~ ~mMϗ-#4y@r4I#a욤;EG-skb3uCtФw 8N[7qL?%(&k4,3z=L*u]PuƖimKkL% `ӻu\1k'QVd槈U}~|؋"`t}MywdJOǾmH D[FiƖ3Fx2Nee23^Ihi؁ˬ vf PDS_QcT<8 HDנ~_0$t.ѤdϏ >P%?XşG}6nu?4C߬ߢG﵏k=ܭ(-_lW hٶUUSŲh(C˲ʶmV?u=먽wWc?ao!ܯP[+?|qCGiLk.c莥tSV^Ev&/`l a<QQ,7QӐqW-%֪fY Q4+.44U5E]uYBNʿi*A鹸Qqkh0T'\W-7*Vzk׽O' $h\,/e}j.(.|Vd`oDp_2kQ0%VCpq/H )4yyY.n |sq~R s$$Ihڒș. a6Ck732s&*u~S\4V$88FøAko4'rFՌL?=;zHDA\edcp"ri*lXN͢w-33g]Kq㜓 l[I8{MӤY$Gmo&k7u̕.ΝhR֍j8 oT{}'fdugwX9c5 %%7ql'9h<[=߆*S Gk$3*i 9 BQ1:lDII *{(P:KF=[bRmݶk7?범V-]7JO"]^ 䃧3bt>^/:L&g`%.]lK/>SIN6թ*kRQ B9LeSע,hjM-*MִyJS޿Tcj$unO`qiTd#֠ap>xnX)$֚PAHE0>)Gm`#H`l(OêlO(>Vv=NQ)p$ h) KV_Fki_4MK$f8=Q* MH kе[JWjSƷC׼9 \o!Pna">4_ M!J3F;{e`FhRLKe6FCo2⯿FHxpmi^r= Ak@J~d7E)aG񔠇Cf$/D̥+TX02'`KQJa8xV8M$,5 KE:KX/9S[5z ڬɜVb6mYG-O-5u'#jmt95I ,jBS ccQ,˲iIZZm] FUE h{#5(N$u͑ۇ(3^ͷ#/@+2aߍ*Gue@\rʺmpb9Ul5ђAK*l-PeG}QKgUnnѶ[1W|)l<Ӣ8o/ۻ/nەnhQfX[ɼ=a'>3( KkpKui{cѢiWӽ{Wmp7 2;OLӱ pꎭnKTԶ jXX,SJKtޑJnHzC6za,"X^o~ }'E0ַR@L WF$5ci5ٲthw}{^)"aq׉$<"p; $nK+P"9%_'TRT_ΚIҫ;Sꈳ3j+^)ظ C[VLJEx!cc ]d&/WwB5bmfq"dHP) L*Tٌ9^f_0,!ذ#P"r{G Ji P:;LwiD֡daUNڜy32gl&L90uHH[ϼqxZH=[|f}RSk'KԔH1r9TuJ7yR Jjxyc)UFJ4pe9U%(N{ iN?UV%\"JiY&.,M)-JkiWySFs@ `=Ou#~Kl{5u΄B`wUS-v-^W>YN}]-Lf([Z\ D)nk?bZ,H,ɾ[15Ԕ8GڌVn[&NN { n+Km'gM=otTIymwn+#a[.Aei^fB`(d2aK@*m|^Q27M2,EFyaX)_bͨYE +h4T4mqDU)W|b9 r0UE\vFG`{DH{L2̼2 %ӒGh" \HF'Md9-:[f 4%GB |n[JB>+͑-0-`:wŬ(hHG(VqU6򁃗p<^%OoʔŖ2eNp\᫄#wD/d LLQKLh\z !3(!. 
ݐ!Nݻ%Q]^o\?.y'*Yշߟ)XN:ւU?J`(XHn֮϶q>}U- on??[DZ 0Ā?(_1GN&?`櫌 =/0:`[\pm/$rp#xcƥfS杆"TtO()wߊ~qbc̑>%:%:f&V맣c.ӰdZ1sIG|CcTj)&3NT}':Ҝ)xމё{zud2Jq'=S{5=1r b]'HR@v)Atj FTB 4ނjɃ75-vxFL"  u3t:B$dbVPEazA-;\) 'kpU+ѾishptN6}oGz \.-z1:$_^YY]dOTA^͠:f e!܏fP9 [zW?Ϫ/g7]$O88_wJ*!D[ڌlY=wW_M3hhH8;Fw5ӏmO搖㲯:H^6-n,x^}J> k"v}t_v"C)51zӮNn+\WȞ2`eQLX„D$1-yUQkdsyek]>ߠLCat-w_~OvNpvAGh9DZ0Q;̀(?Du4s "A47Rkf(<$]}ATms KY0~4Ѽjx[ώP,$*TzUQI}L%UQN*jטnmՏ go;ЮWk#2Ǯ$RgZKyi)Fgߚhj7g?\8xZdEޞO2y}sqA\doQs.W|]̥cuVi4bl]5>}[zU4VQ4d7şoh^9I~M|M{{]3-V?'Pr''e'6KR$ g msy\|T ŏ ՠaqQC9"D3Θ㢳ihݩ=O Qh4WVTa *M xP&xO%ѪХRV!sVj.sWL̳4:GZ(,ˊeU*7ciR5V ilT[{ۏ6:.>k'cWLϊؗM+~Bq3/Yj"X;wweE_n" I393!jOKE>}8 8ӷϾp J:?[ޔH{9="DN6>qM[cmdHgrI5EfQa'΍~Wo[(vXGNpAb }H%BsqbwDOq-ΥssBDC{)>zL*%춇q6%hF5!tµu9NQdE{J8=j 7:YEM~/YFT<=3 赋T0Sj(A2(=21r~)ऑ#/`F/-1z( ? Prת4dQ׏)e ^ fyR$sbILBQ*ӌ )*-ʌRey+c *|EҩFr ъ{@}1X=9䷿1GbZhd&@a}! Q CsE >IY&[  o4苙3hBz5iBzki)uxz_:DTb"+6f0ҕr#Eٰ)qqHH,%K(xzqN"A HTE$is\ɉ}Ioz+ Hv0|$Qǰ#2,7地x+0 l0%T-i%//N9HAOe8 Lͫ  85^q ̑ql8u3Y!B$G5&8Ü c;'Ggț < \Y^r%'<ȈĂjlxrrAzѿۙ.Bd  3U c\P ]h,8$@Ӭ@I8Գ^3 :,'B a)3))B ,R3NY&8+ a ,iH^ nメ Md|0PCP;`2rqR0FbZQW>Ӟ0iiO [XXD gIxF¨L^|03ڨݑD8 }R_JzL1f+-0W QvoyUL6)Wq 4REgRBCJߩcsw;Ԓhܳ#h5c^<% WVC#$}QcTIk}=*H2Fi4d%ӔRTR%;m?W^c%hnFJ$X!}+ w|$ACڛ:֡|; pP":ҩL@TN.Tt~ =1cH5o5맣.5R XF8(?@,BԈrr"}+\ǽ+v}Wܻ!Ol=ƘT}PǽNܤ$'ĭG'8Nl]Ngw>zYX/nYƠnnn??[/w6|Lp ݬ&}VO]Q|uVhUVbJI}+f+=m+VZkGZ)OUřJ5ʼ̌LaA W Ie׶5;קXOblrWj5Izet'RtlS:ID51єyYVZSqM։!zzutPϊckSo/P5k\O`Yw'Ntl {*٣'S O"UP- =4fRPH:jDmV=MPh/m+%nH-0[aݠΩuq568e.ONQKt@ X׃(lkknVEs(~qjwTd'9IvR.mM4FLR۠$Y@RX~D 4k4Fw2-¤ h0z y75)hEx$8eXD Nl% DQsGGqE|01z,QPJ)kb|@( RqXD 6?e)g;P\X0fURZp4uD!W˦: Uw9ӈ@(`˟vyFXNRGNz׊F#s<0C~<a]çO>^5U3a{%|׌9EYUbqU"]i:My*WuK:˥xuM4T ꢲM`s6EZTó:YSM/>ҪRU+rYS:=Cxkzѫ5D05tt69uqRy͐^v1c\4EtsDjb՞X*( 5l@[@-$d/3PI,0+}>2S[RذL$6C47+3%`*L ƑB *l9B 歲jdQ$FWN NLijcb[[,V<(95ωP_4oK7 L_/.0)dT煮ݞ!p"&\Jw"]ݞfLbT*}.0sj'Vhm1Ի/睓;'we\LK)bJiL4Lj>E6e)l9\*D ay*4%tA:?+PF]wt0]K !3 ĕ|Eo _M>;[(O;ߐ)аiR_]O^Q,E(}GP=|IX#Drpwi94:#<.W˧2 + QM9bLd=9ֲ:ǭCas[tr֪nrZVƳD,IT"þYdԒV'kds4k^/xũrR.&Q.hpS9ރmb{ph< ׷ z(Wq@0hߑ(];&?5E$X$_ӵ<շTs٥j 
[MJdzkZ͗9­}o8lM;xKCD#$b"-R &< KMLjW>-\-x`{5郙>5>{bqi+zi_fc^o8.)Vԗm`T.?zIx}G9GSI!%m1-U))LCq<EqaS"SCEIY [֗|+|yY]*:]@'ģ3 =}|d7N)}<Jl@0/ê '-E7]VTtS㢝s_ Ts4مe-ל(/k')m TrMCbLA$&}磉p6܉S?;h8@]jt6~ ޓa6,\Mx`"+{I޻#+՜8Uą2jE.Xc%1XjM2#9˨{& 2::;"{/i'.(g>ᨻRT,u)*,$Gmlta%(T}Zd=Ëq¥cPI<=X@){+n33v" Rj, [F jT,X`K2 !E`rxsuw)-N!1`A8MQcWL1&cJ6_0!KQIA[.lJ՞v%H+dx;U@>Brs%t_PtO,OF[S{u0r=/t/U( u I{6|f0}>ea _mY &煕 ݩe?nxĪ=~>|.RQQl>Llh6Ϧ7Ϙf}Y@&"kI{xn0W?7TԌ䲔{YL3JtL$EJS)2NX,IJa+=j،Hb3շTSY~ٌT䰊 (Ip jܟ|cVN8sGԢƋ7/*bd6tV6;y~am&$0{mw]ӻɤw%Q~_ ra;fӷqJu_> ߑn6kޭ% `y=Z4 e$'Eb%Xu\2N&P -h/=;=QMFM!* J(6CTcf"9Ƙ<0Z)`Bܺݘ@3zM羓ي,P䥟}XLVs TE ~< K3^3CM5Gloіܨue1P}%L㩖K}*vjQξFOayrnE t:<.&d Y)`+tw4H? xqd w=ǷvZr~RVr ~kvӔv 偏QELKQ`Mkx[OVr )%nBy#:}TnEd-vCB^nȔg! NzLPx-fs)~s%`rzkVbQZs^s'{/QY!w䌬ZgJHz-4z, ܻ,Z(C{yQӯ}nJ~US6yj?:xsTy$j J?Š5lw猅B&vP'37֍q?thN=c*$DschV@Oƪ0çO?Ћ{8_cKuԫy"ozx_8Qυ͙W uVhr5wDu܊^P*e{4`ӭ+ Jid ׇ7L賒gkc<=ìTjrwZ9bF v8Yx>mbzuV]XX/^K5.rzpKBJpdyUt% o7hWt -0a|g Ov>;d4Wk:wU ⬄sf^ p׀V)d;.|i¤<V;qݕ˒d5 e}kF񲘅AѡGCsYj cHJ+1~GJ@RPQd.{g?w*MA5CuY*G{qn.v|$XҺ!m2|GY,\J~aKόpMH>^) c|Lŭ?o?5Ut |O(f_";'wNܕ`B2"%(<6r 'CA6*ax j4ͬUimg7wE5sp׫bvtYfmdvb"n- Ts [G\V}a һT󋢄eR%Q#3%H&}Lb)I•&J3*bbF`-8K4rR /%4t~jabW>w.m|s1ӿ.2ro"xxt'M: -]l0ʥQ~zDqo iODf{vΰ-l8v)B9.4"Jkѝo\pwW01D\OQNJa d6H$Dcf,MRgZfH1EF)Ƣ TnQKm42OM_7_sMnMg𩻙}G<KƩ8XLL|zeΞeYD9,O=qfg4KwrkO-"ۈJzd:R(8o;*ljV|:3?)Ŭ'bBJ1(:酒iD$K4שBJ,NHd)CqdF3XN mSazX&j \t%vyh VGNuk" ֗vUFq 4AFFܔ:+D8kY 997H~L0*YˎNT7 u4HՅėL$2Y9DKW"X@πN4Xo~ru]c@+e>9>f>KV^ۋɢ?Qhp{ZeM*怬}]XOo;>Z~#q7k|Ob #g`M@NfTP- E ._i'}I-ޢ#K_9iޮ&T3#L$4b8i2#ѩDpqD¥dݻQvҲߋ2–SE9Хjgjwh6bkUWA8:CExԊGJ|SQ7޴㍑F܇IB&`.L+9'_K%YzdOhK܀gvUw_X>kOT u9f) J 3˽NyFqYVt xe+f7ވ0Ԍ'|X$lFp~Z9(4O`eKB#RtjJP½+87Av*΂3i{H%(2ܔ# "qta:}*;~*5DBp FϗOqq0*ո%sP7ެ[83PRÛ)K /ؿit.Hu7< hU~z_=( @)N p_N/$AA8pE Y/d~/!>>Ũ=c v1PX+قzY2rh'W)@'o*,7 cWFHov#Ӱ^%BFw~lCO a䤛9J "";0 JpPlf ' K S-0k։b05E 3=EL]1O(V\LU! 
ͥFX/(,Az A,c֪,4,-6^KhzwVOOۥvl)M睊oww=E5D,BϿz$S¦zB&;TŊ!o/./?M2p?W4PCVG {*q-34n;vԝ> F,`ϚF>%X)):f1WᲶނ# Fq^XBUU&/ӧrJ|]:hd=}Jpl69º/ E;Q1[2Ӆ6\)Ř%Wh|IMqjGJ}'A+`^7Q=%FYJ,1f9@G)-2wtF zL#`P:Kq3荁'"PIZˍ- "Wq_y7&,ZZZ 6FXg7`wlM~&T![!JPɴ;tqIKFb T yo8)I8U趠@%KX '(n, Ky&n T!@\BO,/V֣rPhWC44(Ҷj%8d/!!./vh,~F;ӣݜL뗣=%!?*Yj:2} Cjp>&nL %]~ݔ Y.`wMk^'iiuzұGw uGdo]TJBef$"ccya3 _عMPYUuuќb/jAP F$(zW1f(/cVň%V o-H*e'΅cTmR4Z0@IH66;N{vOGIm}/SӏK^ћSzlJNژh+܍eAE#>:t tR*f(dG:ԉ!g!L5hd3&vjSo$\C8WrܑD BaR}Q\yjA# n9rcݲS])y7#v0mR% WbmZkkWϧW'٧ˉ~y 9ӍjhO-֦FPp [ƪLۂ&m]f"QLoW-D\ !A۶\92Ak=B]C+>#ȄbAFdF Q@HCZ6=`|4WLz 7ꖆa3ư62k42 ^Z%ZUJ|~J.w](Bf5'q"<Y}I FRf_oM5)Eл֗}y|zќSDN{dPM$LdIy燺czE`qq-| yI5_.1%w7ުG2:E~)u!!\Dd[m&Ah\ bD'w6yy[wZn]H+,2w[-?L 2H:ы p'>@B^)%qLݾ ݅Azn'W/CZIړX.nx5q+ʁ?^3y\u<&}zf4b깑U ,ZVv?Jtp ҫLOyo;YJQٴߺ{p3hQL=(&Sg`-BV%B@p¤(է%?kP@eaF8ݣ%P r ?GaoxC[ ZjpvOJ\&3QJszTFǷG,"#&t{8daկyGpH.rO _zx=)Ғ+g&ѕOdtT9+|2 ž$OFhvsX E;/36Wp4E% yBJJU:u8-+PH+cD$uNkYJ@]VnATlkMWkMl{57ؠ5ՂօJCtUcUZpBr4pi=3( VUZi"TF>XMA~V z T%Hk ])(u*"cEŀk#+f=$g][ƒ+¼͞Pp,Nq'/f9X# Yo5 $R(EVW]U]GJ+M __kƧŧ<@|OQ5`|n=->*6MsH@ȀMţ 28Q ,Ӌ׿VFnn,ؘM iz]n r9DwJju6ȷ^ ?nSt=`W > 4i=FkRȎASMv( a:PȊB"ላg'B&4gd[4tVP ,'31]#^~dS[ֲ6t8sIDb'2B!T`0eZPWP+SH+a cwW#%wnC;(6Sdq82[4_3zҶ3퉕(V\U0"-R= xH$Ԓ؎H3\Jlbm'OQRbLd`[:;Sgs%$CeWs-$A桛|eBU)Ԛ\7-@M{,eP{$Z4Z ց(&]^>K$oI4) rc$|$#3buM9aCSjp鮐 dBUD zf,AL|s`r TBBh>AimÕc`$Kqm<3m;bkKALxG8N#kt A,w#:Gnq 8'YYJԖR.dH1r."fr`Q;"]*QRlU]h.xĮ)$6"$Wr`c\^?^>=~6]-ӧd+fL@1slMUdm#/5Ws..j 8% >>]oZ),z{zX&O ",Bi!y$P Gdw("݃rH,ݛ`эċHEd23cC |\j4 x|t _2M8uC%YNgzdfR*Di"Rvژ;䞼?qL3*ʼ>Uч- =hJzBk^4L>Apy0  G[-y\R\Hr} E,3W)/hI7pݺNMšgk|q /.wJ\D[v;IJA}/d v\f';IA5z'웮v'^r'vfXN#BC{=lofN mIE@=y3{=vZ*23ҭbh2[Bzzxihy[zTx!D5v_{=lTNU!`o֭4Hv{9Z)xBH[gwfqL(jĈhūYm5O?g]&!N:zD V z'8[Y{MMRAmARRy72!E$&\n;͝\  S/ Z6 5}$*sYYIk3Zj7DO/> 1#:rDb}訍f8)")5)-:b!T;+&蒲C;-WR:lHbІq,$Ƶe\˘P:\ck؈/6_?ʹҢoägؙAJF J!SKBDx; yz3xޔDEWsѓ=` ""@4g~ 5ZVhP0aJJ˃MP !ncXh2Z<}2$Rg#CNML8LYDQJϞGH˛oK?t#ݢJ)VrJ(7Yo8sskVi`sVf䳛`ʸTDEj5Vj|qL2*N#69Χe`ΗJ?%F5@4f(Ωdj?Tᰰݍc&g= Y܅d`#C¬wlkcω|8 GYsO$$'Hbsb1eBjZsO|2EY B 4S!vnd+d^~ۦ2Z*+Mo]{ 
ܟiQHIMC޸c%ȆʖZT`E"rr(Эm 86v7\!N*1'_OL=RyZy3gSa]KiL]nL&t"c8x7^=&1Rq>ퟣw3^Ւ vu=W"4hL'J|x;|></)>d3̿qyyntK> LhnK۞sKi:_M'mގI|xc;ws{9m&@n` wc 18TPPMą6XK?Ű7SyW#%w}.6S\sx]Wo \ 0T:b%7=FI|(2Z>0[:׌ [zc%rDC=)BxB\eDe:SJ0c`ŧl? |,M&jهd}(6A,/c?cbiZl~VŊ5O*у>\@%2s &V%ɉEm/&VͶ}{ڹsRD]F"h;@UOrNRajEW{ڝ>z{b0/J5lvfοfeO9+N e@et]$|5{iߚJݚB]Eb%-g=)H.O|SRMb,0 OĎKEwWf&-ιWS1qk=5a]9>Dp3CEwk 9pYUKH:,\|#KBǪ yػ2w2J! v܄Kj"B[`._8tc"Œ8D*i\,U=BvOU^|. :@S-[QGQ'}5M/ūT}=!u/I0NoGS[}? Z޶\GB/NDR\G^[S^^ LM(uYQ{7'M)ڮ(RU`$\ )nX )5Pj* R }n ne9euY+KIGwn1֤-΅m11W k^.1YO7/?ؖf1#`է=U<nD`^%;Q,gtl}26A8{!:c9D}ZQr/z$4vO^`*)8=G6AL0$k,Jʂ8+!~D[.[@j==[UI݌.kU0!u`Uf֧69m͑hq92U)axN}[!-Pi-JI M W᪒>sW̃QճCԛCtPEgGz;C{eL1EwڂɆϕO?2ZU-n~LroMJBz<ё## #MrFJ˰X I^l6wu۸KKGB3,Z ^ Ō!][s6+,٭ЃUgS[M$)geIKR)6uD"%V%}@h|ݠ0Dg-05HRiBrAfRQ 8_\D~(TN-A_c"j[PƯhj@)@[:bo{*@fR3J LV6:X\# 98˭SJ%%6Q+/'<_N\c$=*vj!`0fd[=W<7X25}ţ8W܃*esZӴ0ss`Ȟ+=Gx)ZR"8jAVYqU_Uʱg/QB9C,Y)Z س;̃B%?0>ASV,y|7/փ?<գUxO֜?Ř:\_1^QH/oӁװZbuꗆa0OU7<>x,L-P3؅E|3cJd3Rhľ01H++8iFp_(dW"ST9l H4 &O1qi:{S|`}Z=}g! z`>Na]%E5HJŽx5Xzl'yM28|66O*%{K)qS>p,! "pxnWi ^*5: L|Pw&q.iEHv.Iְ_;ʊ KWr`\b]68@&f<M'!G#Sf_EȖ6:Jl+Kxۯ+XSycĮB/\8=-WBSc]/]=)F_]Vp&gI:eT1B(癕B;|O'} G*Rw`3ST]Hyʅlɩ o|~ݍ4ڲP~=$֢߰.#גce ÔJ{B-?V1mD N $8i+*f^ye8r3[yk<˖XXk񖡷WY7ZhOc5T9\ݨl[aZ?1VAƢ>;eMj4֯K]c`^ngKA|(>3h?Јi@1碥^f:Exԯ=פ W^}OU[$釞*uwZ|ISkNf檭(b) *Όẃ}cS,j]Ō;5oI"R1} u11zTvT`hꕻá{ H+> kBQ`3!kٍk s>ba)Eh^cPp9 {}/Fe}fa$)Rk~gU)q~ .&rs U5kG;`ʑlA*Jz` ֆ%*qUmY4+aں 3V)#=I+y_8K%iE3kcQWXqѲij-*fMlBgUyg_}P UjK!<Ěy&O-Ugx`pKgV!Mےt!5#}zyn&%gyZPKAO*>? H+n80>XR*O.=ϳz`Iے!p1HypYzZۍDf۬U֪kLDʯܹ`ۯC(*<<{}lߊUpp6cM]?#~k 눤"2[>hEd!UUH۟4*A+srS҉<8uňVJ}E{_[jw ߕ%;&`.{:' θbp<M_ zu?SZ ̜?az}6zpaOG)sIuY4Ti?64HhgHMOWF}a//[$9rTɊ00LeaW:U ]kj+O[aTpO>}ȉe2G 3oe)nbe:ba b{ELܖ9zX9b/޻ k fٰ֒|H'7&)unP2D_v_O=n2utS.2] ~W^<CJeQXiS:BCsb5/(/␅-,D2DKcM~) Mf<\҃4 8MvM' C 6qe:k3~pF?=We1jeI Ӗ4̥ajDZ8l+x}+T<|#WF%\no?Rc%R24C Knice[WhuЉ(S ҏgk$fӥ`f7+I'L QA] hf>jL|RjdSkvm6#I%Ԁz! 
̟jFN*0{AS.22j`IRp&`m|fp 2GY`CV1 (i]xb t1A<;2]΋]r]fH7xtXO&i :l֟}pF>nPVa:ԣ ^3ʪ%<_zѳsJVy7hỷFi}yz*@^b9ye$ 2GG{YPD,[zTpVI++x0 T0Ѭxk.PSgkK_RK,ﳹATLjU`ݙ!RDs9ME_aZc",{RdSi'jC V\~Y"x;ݭ3L[Zae SKCN|6*K 9>m\Se4)n[]6NGq4r\[8A&kR[\pt[yǬ3?T״txxt;֡NklzzE8ڽruSuz!4Y60=TK%ET +(H =n:mF{DtP+KWAoGxm6+:ocRr[Ŝ!AVYzwk] 8m-ƾӾ.q!. lB\tE#g%'-kzީqp}k5 OuqQ"_ v{oZWhߺOw-lUJữW/ >!߫m)9"7?9]}INB"x`/9I=]:YY 4Of7QT&'/|-/||L>F#pz90xjW |'ɪ2oaX\]כ]qW` -? P2xBr/\;\lE[JTs jo>,;QJ)8B]CZP,5 ~p8h5z7UPWDeXV:n4% Rk ^@Xsr%JęnO"˥IJ%jEՃAbR< u_ަD^'J7@Dm/JIi`|_J^4E8gk4N#c,[y~qʪ7͖DZUڷJ||ĺ``~VїGl] =b`zT,n5ڹ}o͆= Z66PӍsٌ%}ِ<][^ma ,9Sͳ~K\9$Phd*&&'=C [;f?8O{":F$Xw.իҦc_ LUoU8j?ev'h"^f򩽥SQۛJݸtryh|ㅭ㩡ݤ8,Ȑ #a[wAfL\}\)u8we;a+Y>F0B}R|X(wxS(7(Wep4w+`dL%ߟޕIʤwej {Hn(20Εr7E[!sMmPf)Ո"O~) M2 myMy 'uIN$a3>x'КɆL\[֨Z{P[u;mIJ3\6fHÖHQ7v9X2_W5(+PaAFS u 0Ѫnfk;e| xy*ipl,f,ːa uF2wQhBfqc]n!k!'lSB^tP 2>^ 5۫Ku+OS!%ٓ⪝~tvºL-%;=/ cX-YD+C+w0hx=E _bp:JweMnI02C43),"GDlrnYIq0a;,uDeWG8w6ٔP%N0{+(7+)|u ǹWQ L/7-ܩ[, %׽ DIue^ZRq/4K)]isw3& [xhPH9)բa֒a}ˆi]?$4쥞ai|c&C bN |MM\XK?aSYTL;`g\%Sd#ӌշf\3ߏV'4 wx~$klf ~<Ï~ކ$_}1c8m1o#>C$Zփm8xO>Y'TR`QL{ #Ȧ|ky{jwbׂˠRd?>*)ߦ?-Fcwl nUz7/ov=/prkZck I7h\[63}܊X x×a(ҙf[4]lNi,l4*0w^]'}pK9a#2 ҏ559H.>8:}mnJ08 eBEsci*4 AQZj"HG+Hp_)Mo ?p|bʹ aI>9pC?&(̏tR:Vq8MQBFSb/nmv4(Rb D:tIߌ\~nAPŬ ʺyyoQ! cN"Y'#K\7wu5 ݻޱaye<[:Q0F==[g{=E"Nť*S3PUHkt &ӧŁͅݏC9"IB #)tbsH(  pQ(|$K +(Wس&>B0&(+\W."! 
Mar 14 05:32:24 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 14 05:32:24 crc restorecon[4755]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by
admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 14 
05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 14 05:32:24 crc restorecon[4755]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 14 05:32:24 crc restorecon[4755]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 14 05:32:24 crc restorecon[4755]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 14 05:32:24 crc restorecon[4755]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 05:32:24 crc restorecon[4755]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:24 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c84,c419
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 
05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 
05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc 
restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 05:32:25 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 05:32:25 crc restorecon[4755]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Mar 14 05:32:26 crc kubenswrapper[4817]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 05:32:26 crc kubenswrapper[4817]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 14 05:32:26 crc kubenswrapper[4817]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 05:32:26 crc kubenswrapper[4817]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 05:32:26 crc kubenswrapper[4817]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 14 05:32:26 crc kubenswrapper[4817]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.430060    4817 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434771    4817 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434792    4817 feature_gate.go:330] unrecognized feature gate: Example
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434800    4817 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434807    4817 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434815    4817 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434822    4817 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434828    4817 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434835    4817 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434843    4817 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434849    4817 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434855    4817 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434861    4817 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434868    4817 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434875    4817 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434884    4817 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434916    4817 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434923    4817 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434930    4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434936    4817 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434943    4817 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434949    4817 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434955    4817 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434961    4817 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434967    4817 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434973    4817 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434979    4817 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434985    4817 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434991    4817 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.434998    4817 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435004    4817 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435010    4817 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435016    4817 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435025    4817 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435034    4817 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435041    4817 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435051    4817 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435060    4817 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435067    4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435075    4817 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435083    4817 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435089    4817 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435096    4817 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435102    4817 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435109    4817 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435116    4817 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435122    4817 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435128    4817 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435134    4817 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435143    4817 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435152    4817 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435160    4817 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435166    4817 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435173    4817 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435179    4817 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435187    4817 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435193    4817 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435199    4817 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435205    4817 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435212    4817 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435218    4817 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435224    4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435230    4817 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435237    4817 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435243    4817 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435253    4817 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435262    4817 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435269    4817 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435278    4817 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435287    4817 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435294    4817 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.435302    4817 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436097    4817 flags.go:64] FLAG: --address="0.0.0.0"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436122    4817 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436137    4817 flags.go:64] FLAG: --anonymous-auth="true"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436148    4817 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436158    4817 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436166    4817 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436178    4817 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436187    4817 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436195    4817 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436203    4817 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436211    4817 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436219    4817 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436227    4817 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436235    4817 flags.go:64] FLAG: --cgroup-root=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436242    4817 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436250    4817 flags.go:64] FLAG: --client-ca-file=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436257    4817 flags.go:64] FLAG: --cloud-config=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436264    4817 flags.go:64] FLAG: --cloud-provider=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436271    4817 flags.go:64] FLAG: --cluster-dns="[]"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436281    4817 flags.go:64] FLAG: --cluster-domain=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436288    4817 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436296    4817 flags.go:64] FLAG: --config-dir=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436303    4817 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436312    4817 flags.go:64] FLAG: --container-log-max-files="5"
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436322    4817 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 14 05:32:26 crc kubenswrapper[4817]:
I0314 05:32:26.436330 4817 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436338 4817 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436346 4817 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436353 4817 flags.go:64] FLAG: --contention-profiling="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436362 4817 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436369 4817 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436379 4817 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436388 4817 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436398 4817 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436406 4817 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436414 4817 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436422 4817 flags.go:64] FLAG: --enable-load-reader="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436429 4817 flags.go:64] FLAG: --enable-server="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436437 4817 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436494 4817 flags.go:64] FLAG: --event-burst="100" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436503 4817 flags.go:64] FLAG: --event-qps="50" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436511 4817 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 14 
05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436518 4817 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436526 4817 flags.go:64] FLAG: --eviction-hard="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436536 4817 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436543 4817 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436551 4817 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436558 4817 flags.go:64] FLAG: --eviction-soft="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436565 4817 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436572 4817 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436580 4817 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436588 4817 flags.go:64] FLAG: --experimental-mounter-path="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436595 4817 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436602 4817 flags.go:64] FLAG: --fail-swap-on="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436609 4817 flags.go:64] FLAG: --feature-gates="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436619 4817 flags.go:64] FLAG: --file-check-frequency="20s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436627 4817 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436634 4817 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436643 4817 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436650 4817 flags.go:64] FLAG: --healthz-port="10248" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436658 4817 flags.go:64] FLAG: --help="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436666 4817 flags.go:64] FLAG: --hostname-override="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436673 4817 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436682 4817 flags.go:64] FLAG: --http-check-frequency="20s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436690 4817 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436697 4817 flags.go:64] FLAG: --image-credential-provider-config="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436704 4817 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436712 4817 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436721 4817 flags.go:64] FLAG: --image-service-endpoint="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436728 4817 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436736 4817 flags.go:64] FLAG: --kube-api-burst="100" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436743 4817 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436751 4817 flags.go:64] FLAG: --kube-api-qps="50" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436757 4817 flags.go:64] FLAG: --kube-reserved="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436765 4817 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436773 4817 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436780 4817 flags.go:64] FLAG: --kubelet-cgroups="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436787 4817 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436795 4817 flags.go:64] FLAG: --lock-file="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436802 4817 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436811 4817 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436818 4817 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436830 4817 flags.go:64] FLAG: --log-json-split-stream="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436838 4817 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436845 4817 flags.go:64] FLAG: --log-text-split-stream="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436853 4817 flags.go:64] FLAG: --logging-format="text" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436860 4817 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436868 4817 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436876 4817 flags.go:64] FLAG: --manifest-url="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436883 4817 flags.go:64] FLAG: --manifest-url-header="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436912 4817 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436922 4817 flags.go:64] FLAG: --max-open-files="1000000" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436932 4817 
flags.go:64] FLAG: --max-pods="110" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436940 4817 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436947 4817 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436955 4817 flags.go:64] FLAG: --memory-manager-policy="None" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436963 4817 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436970 4817 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436978 4817 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.436986 4817 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437005 4817 flags.go:64] FLAG: --node-status-max-images="50" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437012 4817 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437021 4817 flags.go:64] FLAG: --oom-score-adj="-999" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437028 4817 flags.go:64] FLAG: --pod-cidr="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437041 4817 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437054 4817 flags.go:64] FLAG: --pod-manifest-path="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437061 4817 flags.go:64] FLAG: --pod-max-pids="-1" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437068 4817 flags.go:64] FLAG: --pods-per-core="0" Mar 14 05:32:26 
crc kubenswrapper[4817]: I0314 05:32:26.437076 4817 flags.go:64] FLAG: --port="10250" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437084 4817 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437091 4817 flags.go:64] FLAG: --provider-id="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437098 4817 flags.go:64] FLAG: --qos-reserved="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437106 4817 flags.go:64] FLAG: --read-only-port="10255" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437115 4817 flags.go:64] FLAG: --register-node="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437123 4817 flags.go:64] FLAG: --register-schedulable="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437131 4817 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437155 4817 flags.go:64] FLAG: --registry-burst="10" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437163 4817 flags.go:64] FLAG: --registry-qps="5" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437171 4817 flags.go:64] FLAG: --reserved-cpus="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437178 4817 flags.go:64] FLAG: --reserved-memory="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437188 4817 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437196 4817 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437204 4817 flags.go:64] FLAG: --rotate-certificates="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437211 4817 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437219 4817 flags.go:64] FLAG: --runonce="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437226 4817 
flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437234 4817 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437242 4817 flags.go:64] FLAG: --seccomp-default="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437251 4817 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437260 4817 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437267 4817 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437275 4817 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437284 4817 flags.go:64] FLAG: --storage-driver-password="root" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437292 4817 flags.go:64] FLAG: --storage-driver-secure="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437300 4817 flags.go:64] FLAG: --storage-driver-table="stats" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437308 4817 flags.go:64] FLAG: --storage-driver-user="root" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437316 4817 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437324 4817 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437332 4817 flags.go:64] FLAG: --system-cgroups="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437340 4817 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437354 4817 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437362 4817 flags.go:64] FLAG: --tls-cert-file="" Mar 14 
05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437396 4817 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437407 4817 flags.go:64] FLAG: --tls-min-version="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437415 4817 flags.go:64] FLAG: --tls-private-key-file="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437422 4817 flags.go:64] FLAG: --topology-manager-policy="none" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437430 4817 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437438 4817 flags.go:64] FLAG: --topology-manager-scope="container" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437445 4817 flags.go:64] FLAG: --v="2" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437457 4817 flags.go:64] FLAG: --version="false" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437468 4817 flags.go:64] FLAG: --vmodule="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437477 4817 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.437486 4817 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437682 4817 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437693 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437700 4817 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437708 4817 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437716 4817 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 05:32:26 crc 
kubenswrapper[4817]: W0314 05:32:26.437723 4817 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437730 4817 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437740 4817 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437747 4817 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437754 4817 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437761 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437768 4817 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437776 4817 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437782 4817 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437789 4817 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437796 4817 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437803 4817 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437809 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437816 4817 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 
05:32:26.437823 4817 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437829 4817 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437837 4817 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437843 4817 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437851 4817 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437858 4817 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437865 4817 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437872 4817 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437878 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437885 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437910 4817 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437917 4817 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437923 4817 feature_gate.go:330] unrecognized feature gate: Example Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437930 4817 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437938 4817 feature_gate.go:351] Setting deprecated feature gate 
KMSv1=true. It will be removed in a future release. Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437946 4817 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437953 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437960 4817 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437966 4817 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437973 4817 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437980 4817 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437987 4817 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.437996 4817 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438004 4817 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438012 4817 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438021 4817 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438027 4817 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438034 4817 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438041 4817 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438047 4817 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438054 4817 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438060 4817 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438066 4817 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438073 4817 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438081 4817 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438089 4817 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438095 4817 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438102 4817 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438109 4817 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438115 4817 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438124 4817 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438130 4817 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438137 4817 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438144 4817 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438150 4817 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438157 4817 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438164 4817 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438170 4817 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438179 4817 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438189 4817 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438196 4817 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.438203 4817 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.438225 4817 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.447354 4817 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.447383 4817 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447489 4817 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447502 4817 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447509 4817 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447516 4817 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447522 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447529 4817 feature_gate.go:330] unrecognized 
feature gate: AutomatedEtcdBackup Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447536 4817 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447542 4817 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447548 4817 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447555 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447564 4817 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447573 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447580 4817 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447588 4817 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447597 4817 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447603 4817 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447610 4817 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447617 4817 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447623 4817 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447629 4817 feature_gate.go:330] unrecognized feature gate: 
NetworkSegmentation Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447636 4817 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447642 4817 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447648 4817 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447654 4817 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447661 4817 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447667 4817 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447673 4817 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447679 4817 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447685 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447691 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447697 4817 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447704 4817 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447709 4817 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447716 4817 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 
05:32:26.447722 4817 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447728 4817 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447734 4817 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447741 4817 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447747 4817 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447753 4817 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447760 4817 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447766 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447774 4817 feature_gate.go:330] unrecognized feature gate: Example Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447780 4817 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447786 4817 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447793 4817 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447799 4817 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447805 4817 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447812 4817 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 05:32:26 
crc kubenswrapper[4817]: W0314 05:32:26.447818 4817 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447825 4817 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447831 4817 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447838 4817 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447843 4817 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447850 4817 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447856 4817 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447862 4817 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447869 4817 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447876 4817 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447882 4817 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447888 4817 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447922 4817 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447929 4817 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 
05:32:26.447938 4817 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447947 4817 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447956 4817 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447964 4817 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447972 4817 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447980 4817 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447989 4817 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.447996 4817 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.448007 4817 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448200 4817 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448213 4817 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448219 4817 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448226 4817 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448233 4817 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448239 4817 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448246 4817 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448255 4817 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448264 4817 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448271 4817 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448279 4817 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448287 4817 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448293 4817 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448300 4817 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448308 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448315 4817 feature_gate.go:330] unrecognized feature gate: PlatformOperators 
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448321 4817 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448327 4817 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448336 4817 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448343 4817 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448350 4817 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448357 4817 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448364 4817 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448371 4817 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448377 4817 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448384 4817 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448390 4817 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448396 4817 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448402 4817 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448409 4817 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 
14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448416 4817 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448422 4817 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448428 4817 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448434 4817 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448441 4817 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448447 4817 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448454 4817 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448462 4817 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448471 4817 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448478 4817 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448485 4817 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448492 4817 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448498 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448504 4817 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448510 4817 feature_gate.go:330] unrecognized feature gate: Example Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448517 4817 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448523 4817 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448530 4817 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448536 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448543 4817 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448551 4817 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448557 4817 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448563 4817 feature_gate.go:330] 
unrecognized feature gate: NetworkSegmentation Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448569 4817 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448575 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448581 4817 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448587 4817 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448594 4817 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448600 4817 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448608 4817 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448616 4817 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448624 4817 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448633 4817 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448640 4817 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448646 4817 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448653 4817 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448659 4817 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448667 4817 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448673 4817 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448679 4817 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.448685 4817 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.448696 4817 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false 
ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.448867 4817 server.go:940] "Client rotation is on, will bootstrap in background" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.455017 4817 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.458322 4817 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.458456 4817 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.460514 4817 server.go:997] "Starting client certificate rotation" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.460575 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.460779 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.488527 4817 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.490682 4817 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.497599 4817 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.512298 4817 log.go:25] "Validated CRI v1 runtime API" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.548699 4817 log.go:25] "Validated CRI v1 image API" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.550699 4817 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.557349 4817 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-14-05-27-36-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.557388 4817 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.588963 4817 manager.go:217] Machine: {Timestamp:2026-03-14 05:32:26.570281754 +0000 UTC m=+0.608542520 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7d31aad3-6adc-4cbd-bc39-029dc91df933 
BootID:63ac787c-19bc-4f4c-91a6-5792ebe52e66 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:44:94:33 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:44:94:33 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6a:92:62 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f7:c1:a7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1f:90:58 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:96:88:c7 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:9f:29:4c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e6:29:2b:26:98:5b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9a:0c:bd:e4:ba:96 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 
Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] 
SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.589249 4817 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.589519 4817 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.590313 4817 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.590721 4817 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.590782 4817 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.591168 4817 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.591188 4817 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.591850 4817 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.591928 4817 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.592839 4817 state_mem.go:36] "Initialized new in-memory state store" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.593008 4817 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.598025 4817 kubelet.go:418] "Attempting to sync node with API server" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.598085 4817 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.598135 4817 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.598163 4817 kubelet.go:324] "Adding apiserver pod source" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.598190 4817 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.605288 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.605373 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.605518 4817 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.605643 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.607097 4817 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.610487 4817 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.621889 4817 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623657 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623712 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623735 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623753 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623782 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623801 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623820 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623848 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623869 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623889 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623944 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.623964 4817 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.652464 4817 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.653380 4817 server.go:1280] "Started kubelet" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.653963 4817 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.653993 4817 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.655013 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.655154 4817 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 05:32:26 crc systemd[1]: Started Kubernetes Kubelet. Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.656807 4817 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.656855 4817 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.657590 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.657649 4817 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.657681 4817 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.657847 4817 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.658953 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.659219 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.658960 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="200ms" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.660097 4817 server.go:460] "Adding debug handlers to kubelet server" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.660536 4817 factory.go:55] Registering systemd factory Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.663092 4817 factory.go:221] Registration of the systemd container factory successfully Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.663280 4817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.29:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c9e3efd61ad71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,LastTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC 
m=+0.691591587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.666757 4817 factory.go:153] Registering CRI-O factory Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.666830 4817 factory.go:221] Registration of the crio container factory successfully Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.667056 4817 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.667122 4817 factory.go:103] Registering Raw factory Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.667165 4817 manager.go:1196] Started watching for new ooms in manager Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.668524 4817 manager.go:319] Starting recovery of all containers Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.698051 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.698575 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.698705 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.698851 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.699030 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.699163 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.699283 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.699403 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.699547 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.699670 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.699788 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.699945 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705267 4817 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705345 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705386 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705411 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705439 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705461 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705483 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705503 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705554 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705577 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705596 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705623 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705643 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705664 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705688 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705723 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705748 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705787 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705817 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705836 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705858 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705878 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705919 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705940 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705959 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705979 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.705998 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706018 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706038 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706057 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706075 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706098 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706118 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706137 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706158 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706179 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706196 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706216 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706235 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706254 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706277 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706312 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706331 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706351 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706370 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706390 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706409 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706428 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706444 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706496 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706521 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" 
seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706540 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706558 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.706576 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.707473 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.707511 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.707538 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 
05:32:26.707560 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.707580 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.707603 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.707625 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.707508 4817 manager.go:324] Recovery completed Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.707645 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708453 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 14 05:32:26 
crc kubenswrapper[4817]: I0314 05:32:26.708500 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708517 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708536 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708552 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708572 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708588 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708606 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708624 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708640 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708658 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708678 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708694 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708711 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708725 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708741 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708756 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708779 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708793 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708807 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708821 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708832 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708846 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708864 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708881 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708915 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708931 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708955 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708971 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.708988 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709009 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709035 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709058 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709077 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709091 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709106 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709124 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709380 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709399 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709415 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709431 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709447 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709460 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709476 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709489 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709551 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709567 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709578 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709589 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709601 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709613 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709633 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709646 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709659 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709671 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709684 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709699 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709712 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709725 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709740 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709753 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709764 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709778 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709793 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709807 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709836 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709850 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709865 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709879 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709906 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709921 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709934 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709947 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709960 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709973 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709986 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.709998 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710011 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710024 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710038 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710050 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710068 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710081 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710095 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710107 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710121 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710134 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710147 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710161 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710178 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710191 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710209 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710223 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710236 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710249 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710261 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710275 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710290 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710303 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710316 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710329 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710343 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710356 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710371 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710389 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710408 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710420 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710432 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710447 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710461 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710474 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710489 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710502 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710515 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710529 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710544 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710558 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710573 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710587 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710601 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710613 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710627 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710641 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710657 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710670 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710683 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710697 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710710 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710722 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710736 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710749 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b"
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710763 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710776 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710789 4817 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710802 4817 reconstruct.go:97] "Volume reconstruction finished" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.710812 4817 reconciler.go:26] "Reconciler: start to sync state" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.723982 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.727778 4817 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.728029 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.728082 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.728096 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.729755 4817 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.729933 4817 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.730035 4817 state_mem.go:36] "Initialized new in-memory state store" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.730599 4817 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.730662 4817 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.730711 4817 kubelet.go:2335] "Starting kubelet main sync loop" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.730947 4817 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 05:32:26 crc kubenswrapper[4817]: W0314 05:32:26.740801 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.740880 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.758348 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.831618 4817 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.859068 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.860842 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="400ms" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.954021 4817 policy_none.go:49] "None policy: Start" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.955604 4817 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 14 05:32:26 crc kubenswrapper[4817]: I0314 05:32:26.955655 4817 state_mem.go:35] "Initializing new in-memory state store" Mar 14 05:32:26 crc kubenswrapper[4817]: E0314 05:32:26.960246 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.032039 4817 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.050610 4817 manager.go:334] "Starting Device Plugin manager" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.051054 4817 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.051086 4817 server.go:79] "Starting device plugin registration server" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.051716 4817 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.051742 4817 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.052605 4817 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.052886 4817 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 14 05:32:27 crc kubenswrapper[4817]: 
I0314 05:32:27.052953 4817 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.063447 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.152202 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.153779 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.153831 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.153843 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.153877 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.154559 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.29:6443: connect: connection refused" node="crc" Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.262460 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="800ms" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.355648 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.357436 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.357473 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.357485 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.357515 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.358151 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.29:6443: connect: connection refused" node="crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.432632 4817 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.432768 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.435185 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.435225 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.435237 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.435367 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 
14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.435854 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.435982 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.436346 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.436401 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.436424 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.436608 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.436780 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.436849 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.437712 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.437749 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.437761 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.438120 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.438175 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.438218 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.438428 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.438652 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.438739 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.446370 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.446433 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.446456 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.450443 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.450494 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.450506 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.450537 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.450593 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.450946 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.451405 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: 
I0314 05:32:27.451563 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.451623 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.452971 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.452998 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.453009 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.453087 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.453131 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.453197 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.453249 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.453287 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.454455 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.454486 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.454497 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521077 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521130 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521183 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521206 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521232 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521257 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521281 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521531 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521623 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521680 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521734 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521783 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521831 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.521876 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.522015 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: W0314 05:32:27.536628 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.536788 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:27 crc kubenswrapper[4817]: W0314 05:32:27.588387 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.588491 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" 
Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.623852 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624046 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624178 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624275 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624331 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624369 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624417 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624294 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624456 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624469 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624505 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624588 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624596 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624645 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624709 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624727 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624762 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624776 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624821 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624839 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624866 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624927 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.624962 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.625040 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.625068 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.625093 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.625057 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.625002 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.625152 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.625287 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.656350 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:27 crc kubenswrapper[4817]: W0314 05:32:27.753584 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.753713 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 
05:32:27.758410 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.760180 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.760234 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.760252 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.760286 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:32:27 crc kubenswrapper[4817]: E0314 05:32:27.760833 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.29:6443: connect: connection refused" node="crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.778182 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.801998 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.820312 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.852318 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: I0314 05:32:27.856768 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:32:27 crc kubenswrapper[4817]: W0314 05:32:27.890733 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-52a7a8fb56915ba44984526f5b9ade0f64f7938d11e6b5062e06b231905128db WatchSource:0}: Error finding container 52a7a8fb56915ba44984526f5b9ade0f64f7938d11e6b5062e06b231905128db: Status 404 returned error can't find the container with id 52a7a8fb56915ba44984526f5b9ade0f64f7938d11e6b5062e06b231905128db Mar 14 05:32:27 crc kubenswrapper[4817]: W0314 05:32:27.891545 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9309c388fcae5ee2d587185d2aa2d0ea019e3b280354e3ede5ed90a429cae5b8 WatchSource:0}: Error finding container 9309c388fcae5ee2d587185d2aa2d0ea019e3b280354e3ede5ed90a429cae5b8: Status 404 returned error can't find the container with id 9309c388fcae5ee2d587185d2aa2d0ea019e3b280354e3ede5ed90a429cae5b8 Mar 14 05:32:27 crc kubenswrapper[4817]: W0314 05:32:27.899724 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3e18e61077d32613894023dd6ff557c6ba2ab5ebdb9e0f0b079a285033deed6d WatchSource:0}: Error finding container 3e18e61077d32613894023dd6ff557c6ba2ab5ebdb9e0f0b079a285033deed6d: Status 404 returned error can't find the container with id 3e18e61077d32613894023dd6ff557c6ba2ab5ebdb9e0f0b079a285033deed6d Mar 14 05:32:27 crc kubenswrapper[4817]: W0314 05:32:27.902605 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-fb77f5292b65dd315b081a9f81b14229f342b96211634a2effacdfae90f33ae6 
WatchSource:0}: Error finding container fb77f5292b65dd315b081a9f81b14229f342b96211634a2effacdfae90f33ae6: Status 404 returned error can't find the container with id fb77f5292b65dd315b081a9f81b14229f342b96211634a2effacdfae90f33ae6 Mar 14 05:32:27 crc kubenswrapper[4817]: W0314 05:32:27.908179 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-09809f82c947bb630133035f43b9d40889d3f01c21f5d33ed77276794356d359 WatchSource:0}: Error finding container 09809f82c947bb630133035f43b9d40889d3f01c21f5d33ed77276794356d359: Status 404 returned error can't find the container with id 09809f82c947bb630133035f43b9d40889d3f01c21f5d33ed77276794356d359 Mar 14 05:32:28 crc kubenswrapper[4817]: E0314 05:32:28.063558 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="1.6s" Mar 14 05:32:28 crc kubenswrapper[4817]: W0314 05:32:28.200286 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:28 crc kubenswrapper[4817]: E0314 05:32:28.200467 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.521074 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating 
certificates Mar 14 05:32:28 crc kubenswrapper[4817]: E0314 05:32:28.522444 4817 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.561672 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.564416 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.564499 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.564519 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.564563 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:32:28 crc kubenswrapper[4817]: E0314 05:32:28.565252 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.29:6443: connect: connection refused" node="crc" Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.656162 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.741860 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"09809f82c947bb630133035f43b9d40889d3f01c21f5d33ed77276794356d359"} Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.744933 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb77f5292b65dd315b081a9f81b14229f342b96211634a2effacdfae90f33ae6"} Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.747757 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e18e61077d32613894023dd6ff557c6ba2ab5ebdb9e0f0b079a285033deed6d"} Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.750185 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9309c388fcae5ee2d587185d2aa2d0ea019e3b280354e3ede5ed90a429cae5b8"} Mar 14 05:32:28 crc kubenswrapper[4817]: I0314 05:32:28.752815 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"52a7a8fb56915ba44984526f5b9ade0f64f7938d11e6b5062e06b231905128db"} Mar 14 05:32:29 crc kubenswrapper[4817]: W0314 05:32:29.223360 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:29 crc kubenswrapper[4817]: E0314 05:32:29.223446 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:29 crc kubenswrapper[4817]: W0314 05:32:29.347604 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:29 crc kubenswrapper[4817]: E0314 05:32:29.347696 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.656517 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:29 crc kubenswrapper[4817]: E0314 05:32:29.665083 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="3.2s" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.757758 4817 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="29f8d661514f66b30b2b1ca60d769ada85df1b08099a1cc0ad4c44413dae300a" exitCode=0 Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.757863 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"29f8d661514f66b30b2b1ca60d769ada85df1b08099a1cc0ad4c44413dae300a"} Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.757985 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.759816 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.759889 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.759941 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.760440 4817 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274" exitCode=0 Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.760563 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274"} Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.760926 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.762948 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.762975 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.763030 4817 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.765812 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"16022c27fbbc60a8c0c33d90ddfded55d22accd6833b1494bf1c538e303d3cff"} Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.765862 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.765864 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8fa92262f8893709c8562fd1dec5e2da261d5b71b2887ff45ff57f95705b2d65"} Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.765962 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0baf19f2cd5467c4c27f83ffc6084d9647778a9ffa8734f7785bb160fc3a1edc"} Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.766001 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2aa90bf900ca4ebcff2100db7311e7c25aaab7e81f941eb51261122084cd704e"} Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.766855 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.766909 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 
05:32:29.766922 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.768602 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce" exitCode=0 Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.768704 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce"} Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.768731 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.770734 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.770776 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.770793 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.776958 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.777369 4817 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="443b941ef9a36e69b3a896ae1e5f61a8e9345a5880f1d74e378ffb30b82bef4b" exitCode=0 Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.777440 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"443b941ef9a36e69b3a896ae1e5f61a8e9345a5880f1d74e378ffb30b82bef4b"} Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.777501 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.778465 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.778510 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.778527 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.778886 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.778940 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:29 crc kubenswrapper[4817]: I0314 05:32:29.778955 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:29 crc kubenswrapper[4817]: W0314 05:32:29.988409 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:29 crc kubenswrapper[4817]: E0314 05:32:29.988489 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 
38.129.56.29:6443: connect: connection refused" logger="UnhandledError" Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.115600 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.166043 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.167972 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.168024 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.168075 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.168114 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:32:30 crc kubenswrapper[4817]: E0314 05:32:30.168590 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.29:6443: connect: connection refused" node="crc" Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.656728 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection refused Mar 14 05:32:30 crc kubenswrapper[4817]: W0314 05:32:30.661530 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.29:6443: connect: connection 
refused
Mar 14 05:32:30 crc kubenswrapper[4817]: E0314 05:32:30.661665 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.29:6443: connect: connection refused" logger="UnhandledError"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.791968 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.792051 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.792070 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.792090 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.794594 4817 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da6685a0c6fbd105ff5167d779c111dabd641b8dbe1c48d19b5e5940c7ed43e2" exitCode=0
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.794724 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da6685a0c6fbd105ff5167d779c111dabd641b8dbe1c48d19b5e5940c7ed43e2"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.794775 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.796336 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.796404 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.796422 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.798359 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"72d46f2efaf41f08ebd8b3a002aaed1d57a4377d7281b9b6434cbab6a430005e"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.798503 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.800316 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.800353 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.800371 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.806802 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.807576 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.807988 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.808033 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.808047 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1"}
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.808763 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.808806 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.808821 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.809590 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.809624 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:30 crc kubenswrapper[4817]: I0314 05:32:30.809640 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.818402 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe194f9ff88ac5e3dd7093f23e72ca548b24919a2cef8d160220b851909e5d35"}
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.818521 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.820216 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.820311 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.820335 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.822667 4817 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c859592d7898a22224662fd0d8676f109a55fd4cacf198508e0513017a5112f7" exitCode=0
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.822842 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.822877 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.822940 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c859592d7898a22224662fd0d8676f109a55fd4cacf198508e0513017a5112f7"}
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.822982 4817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.823054 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.822878 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.824834 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.824952 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.824990 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825069 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825107 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825127 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825173 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825211 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825232 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825081 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825291 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.825320 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:31 crc kubenswrapper[4817]: I0314 05:32:31.916732 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.705952 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.829484 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"24d352a819f77be1fcc8400c6c86803e22d73790619b045ec62b311a98a18593"}
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.829549 4817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.829553 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"55e2bed7366fff77326cc9277cd012c349f91e7133613684ad78968a3ca16cde"}
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.829580 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.829583 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.829578 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b1d67d5d18cf466a822fc142ff6559d76575538205257996760b7689a81639b3"}
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.829767 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"04b0e584cc08da94b6f8200a9b34155abdebf01ad69d33996accbaf1d15a7d56"}
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.830557 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.830573 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.830602 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.830613 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.830616 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:32 crc kubenswrapper[4817]: I0314 05:32:32.830629 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.115968 4817 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.116071 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.285997 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.286233 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.287845 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.287952 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.287974 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.369185 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.371277 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.371357 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.371385 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.371438 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.841169 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"125412cdae49f2599fef2fa451235aa06c2202202ec53f3da4d7471edcd4916a"}
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.841407 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.843210 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.843288 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:33 crc kubenswrapper[4817]: I0314 05:32:33.843309 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.487188 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.682336 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.682594 4817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.682675 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.684629 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.684703 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.684722 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.844332 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.849926 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.849995 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.850016 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.938759 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.939059 4817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.939113 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.941188 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.941232 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:34 crc kubenswrapper[4817]: I0314 05:32:34.941245 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:35 crc kubenswrapper[4817]: I0314 05:32:35.847768 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:35 crc kubenswrapper[4817]: I0314 05:32:35.849505 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:35 crc kubenswrapper[4817]: I0314 05:32:35.849567 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:35 crc kubenswrapper[4817]: I0314 05:32:35.849586 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:35 crc kubenswrapper[4817]: I0314 05:32:35.988042 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.295112 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.295417 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.297067 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.297121 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.297135 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.303994 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.851086 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.851245 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.853035 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.853035 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.853097 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.853161 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.853202 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:36 crc kubenswrapper[4817]: I0314 05:32:36.853222 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:37 crc kubenswrapper[4817]: E0314 05:32:37.063614 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 05:32:37 crc kubenswrapper[4817]: I0314 05:32:37.266126 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:32:37 crc kubenswrapper[4817]: I0314 05:32:37.266402 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:37 crc kubenswrapper[4817]: I0314 05:32:37.268439 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:37 crc kubenswrapper[4817]: I0314 05:32:37.268521 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:37 crc kubenswrapper[4817]: I0314 05:32:37.268540 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.745492 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.745736 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.747346 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.747405 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.747425 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.751688 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.859453 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.861047 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.861215 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:39 crc kubenswrapper[4817]: I0314 05:32:39.861322 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:41 crc kubenswrapper[4817]: I0314 05:32:41.656721 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Mar 14 05:32:41 crc kubenswrapper[4817]: I0314 05:32:41.917129 4817 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 14 05:32:41 crc kubenswrapper[4817]: E0314 05:32:41.917028 4817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c9e3efd61ad71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,LastTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:32:41 crc kubenswrapper[4817]: I0314 05:32:41.917553 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 14 05:32:41 crc kubenswrapper[4817]: E0314 05:32:41.918396 4817 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 05:32:41 crc kubenswrapper[4817]: W0314 05:32:41.918529 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:41 crc kubenswrapper[4817]: E0314 05:32:41.918776 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 05:32:41 crc kubenswrapper[4817]: E0314 05:32:41.924748 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Mar 14 05:32:41 crc kubenswrapper[4817]: I0314 05:32:41.929155 4817 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 14 05:32:41 crc kubenswrapper[4817]: I0314 05:32:41.929229 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 14 05:32:41 crc kubenswrapper[4817]: W0314 05:32:41.930759 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:41 crc kubenswrapper[4817]: E0314 05:32:41.930879 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 05:32:41 crc kubenswrapper[4817]: E0314 05:32:41.931245 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 14 05:32:41 crc kubenswrapper[4817]: W0314 05:32:41.931357 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:41 crc kubenswrapper[4817]: E0314 05:32:41.931432 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 05:32:41 crc kubenswrapper[4817]: W0314 05:32:41.934026 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:41 crc kubenswrapper[4817]: E0314 05:32:41.934129 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.660451 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:42Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.870114 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.872880 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe194f9ff88ac5e3dd7093f23e72ca548b24919a2cef8d160220b851909e5d35" exitCode=255
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.873189 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fe194f9ff88ac5e3dd7093f23e72ca548b24919a2cef8d160220b851909e5d35"}
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.873658 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.875351 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.875555 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.875762 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:42 crc kubenswrapper[4817]: I0314 05:32:42.876958 4817 scope.go:117] "RemoveContainer" containerID="fe194f9ff88ac5e3dd7093f23e72ca548b24919a2cef8d160220b851909e5d35"
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.117450 4817 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.117576 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.661837 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:43Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.877884 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.879130 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13"}
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.879284 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.880364 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.880392 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:43 crc kubenswrapper[4817]: I0314 05:32:43.880402 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.531661 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.532018 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.533786 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.533862 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.533937 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.547112 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.660564 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:44Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.883732 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.884509 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.887265 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13" exitCode=255
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.887382 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13"}
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.887464 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.887501 4817 scope.go:117] "RemoveContainer" containerID="fe194f9ff88ac5e3dd7093f23e72ca548b24919a2cef8d160220b851909e5d35"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.887662 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.888599 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:44 crc
kubenswrapper[4817]: I0314 05:32:44.888638 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.888648 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.889433 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.889474 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.889489 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.890253 4817 scope.go:117] "RemoveContainer" containerID="8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13" Mar 14 05:32:44 crc kubenswrapper[4817]: E0314 05:32:44.890474 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:32:44 crc kubenswrapper[4817]: I0314 05:32:44.946775 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:45 crc kubenswrapper[4817]: I0314 05:32:45.662953 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-14T05:32:45Z is after 2026-02-23T05:33:13Z Mar 14 05:32:45 crc kubenswrapper[4817]: I0314 05:32:45.892195 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 05:32:45 crc kubenswrapper[4817]: I0314 05:32:45.894539 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:45 crc kubenswrapper[4817]: I0314 05:32:45.896309 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:45 crc kubenswrapper[4817]: I0314 05:32:45.896409 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:45 crc kubenswrapper[4817]: I0314 05:32:45.896433 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:45 crc kubenswrapper[4817]: I0314 05:32:45.898237 4817 scope.go:117] "RemoveContainer" containerID="8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13" Mar 14 05:32:45 crc kubenswrapper[4817]: E0314 05:32:45.898627 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:32:45 crc kubenswrapper[4817]: I0314 05:32:45.904273 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:46 crc kubenswrapper[4817]: I0314 05:32:46.660521 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:46Z is after 2026-02-23T05:33:13Z Mar 14 05:32:46 crc kubenswrapper[4817]: I0314 05:32:46.898051 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:46 crc kubenswrapper[4817]: I0314 05:32:46.899840 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:46 crc kubenswrapper[4817]: I0314 05:32:46.900108 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:46 crc kubenswrapper[4817]: I0314 05:32:46.900153 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:46 crc kubenswrapper[4817]: I0314 05:32:46.901312 4817 scope.go:117] "RemoveContainer" containerID="8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13" Mar 14 05:32:46 crc kubenswrapper[4817]: E0314 05:32:46.901684 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:32:47 crc kubenswrapper[4817]: E0314 05:32:47.064000 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 05:32:47 crc kubenswrapper[4817]: I0314 05:32:47.267108 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:47 crc kubenswrapper[4817]: I0314 
05:32:47.661366 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:47Z is after 2026-02-23T05:33:13Z Mar 14 05:32:47 crc kubenswrapper[4817]: I0314 05:32:47.901663 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:47 crc kubenswrapper[4817]: I0314 05:32:47.903243 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:47 crc kubenswrapper[4817]: I0314 05:32:47.903303 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:47 crc kubenswrapper[4817]: I0314 05:32:47.903327 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:47 crc kubenswrapper[4817]: I0314 05:32:47.904377 4817 scope.go:117] "RemoveContainer" containerID="8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13" Mar 14 05:32:47 crc kubenswrapper[4817]: E0314 05:32:47.904736 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:32:48 crc kubenswrapper[4817]: E0314 05:32:48.331150 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-03-14T05:32:48Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 05:32:48 crc kubenswrapper[4817]: I0314 05:32:48.332150 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:48 crc kubenswrapper[4817]: I0314 05:32:48.334027 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:48 crc kubenswrapper[4817]: I0314 05:32:48.334101 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:48 crc kubenswrapper[4817]: I0314 05:32:48.334123 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:48 crc kubenswrapper[4817]: I0314 05:32:48.334164 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:32:48 crc kubenswrapper[4817]: E0314 05:32:48.339489 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:48Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 05:32:48 crc kubenswrapper[4817]: I0314 05:32:48.660406 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:48Z is after 2026-02-23T05:33:13Z Mar 14 05:32:49 crc kubenswrapper[4817]: I0314 05:32:49.661574 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-14T05:32:49Z is after 2026-02-23T05:33:13Z Mar 14 05:32:49 crc kubenswrapper[4817]: W0314 05:32:49.846495 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:49Z is after 2026-02-23T05:33:13Z Mar 14 05:32:49 crc kubenswrapper[4817]: E0314 05:32:49.846629 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 05:32:50 crc kubenswrapper[4817]: I0314 05:32:50.021675 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 05:32:50 crc kubenswrapper[4817]: E0314 05:32:50.026660 4817 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 05:32:50 crc kubenswrapper[4817]: I0314 05:32:50.661808 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:50Z is after 2026-02-23T05:33:13Z Mar 14 05:32:51 crc kubenswrapper[4817]: I0314 05:32:51.661661 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:51Z is after 2026-02-23T05:33:13Z Mar 14 05:32:51 crc kubenswrapper[4817]: E0314 05:32:51.924282 4817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:51Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c9e3efd61ad71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,LastTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:32:52 crc kubenswrapper[4817]: I0314 05:32:52.661356 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:52Z is after 2026-02-23T05:33:13Z Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.116853 4817 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.117080 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.117191 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.117443 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.119575 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.119650 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.119677 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.120648 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"2aa90bf900ca4ebcff2100db7311e7c25aaab7e81f941eb51261122084cd704e"} 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.120982 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://2aa90bf900ca4ebcff2100db7311e7c25aaab7e81f941eb51261122084cd704e" gracePeriod=30 Mar 14 05:32:53 crc kubenswrapper[4817]: W0314 05:32:53.174769 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:53Z is after 2026-02-23T05:33:13Z Mar 14 05:32:53 crc kubenswrapper[4817]: E0314 05:32:53.174960 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:53Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.663626 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:53Z is after 2026-02-23T05:33:13Z Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.928441 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.929122 4817 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2aa90bf900ca4ebcff2100db7311e7c25aaab7e81f941eb51261122084cd704e" exitCode=255 Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.929194 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2aa90bf900ca4ebcff2100db7311e7c25aaab7e81f941eb51261122084cd704e"} Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.929260 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2a10d3c420d8ba02afe73e26c37342660706936b6949c66530e39bac3454f88a"} Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.929483 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.930863 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.930946 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:53 crc kubenswrapper[4817]: I0314 05:32:53.930967 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:54 crc kubenswrapper[4817]: W0314 05:32:54.283369 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:54Z is after 2026-02-23T05:33:13Z Mar 14 05:32:54 crc kubenswrapper[4817]: E0314 05:32:54.283488 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.308011 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.308345 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.309861 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.309922 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.309932 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.310481 4817 scope.go:117] "RemoveContainer" containerID="8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13" Mar 14 05:32:54 crc kubenswrapper[4817]: W0314 05:32:54.525636 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:54Z is after 2026-02-23T05:33:13Z Mar 14 05:32:54 crc kubenswrapper[4817]: E0314 05:32:54.525762 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.659197 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:54Z is after 2026-02-23T05:33:13Z Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.935754 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.938441 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d53a651a485b41bd420cc405d2f26c474042daca2673c3413b88b024631227c2"} Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.938694 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.940173 4817 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.940225 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:54 crc kubenswrapper[4817]: I0314 05:32:54.940245 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:55 crc kubenswrapper[4817]: E0314 05:32:55.335152 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:55Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.340183 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.342602 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.342672 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.342693 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.342755 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:32:55 crc kubenswrapper[4817]: E0314 05:32:55.348415 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:55Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 05:32:55 
crc kubenswrapper[4817]: I0314 05:32:55.659244 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:55Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.943299 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.943917 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.946055 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d53a651a485b41bd420cc405d2f26c474042daca2673c3413b88b024631227c2" exitCode=255
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.946134 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d53a651a485b41bd420cc405d2f26c474042daca2673c3413b88b024631227c2"}
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.946210 4817 scope.go:117] "RemoveContainer" containerID="8dd74231c3d692c9c3c3c0fb0c4a085a7bf556d9219a3344cce7b311e30c0e13"
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.946500 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.947928 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.947965 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.947974 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:55 crc kubenswrapper[4817]: I0314 05:32:55.948594 4817 scope.go:117] "RemoveContainer" containerID="d53a651a485b41bd420cc405d2f26c474042daca2673c3413b88b024631227c2"
Mar 14 05:32:55 crc kubenswrapper[4817]: E0314 05:32:55.948757 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 14 05:32:56 crc kubenswrapper[4817]: I0314 05:32:56.660990 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:56Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:56 crc kubenswrapper[4817]: I0314 05:32:56.952548 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 14 05:32:57 crc kubenswrapper[4817]: E0314 05:32:57.064133 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 05:32:57 crc kubenswrapper[4817]: I0314 05:32:57.266999 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:32:57 crc kubenswrapper[4817]: I0314 05:32:57.267168 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:57 crc kubenswrapper[4817]: I0314 05:32:57.268535 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:57 crc kubenswrapper[4817]: I0314 05:32:57.268579 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:57 crc kubenswrapper[4817]: I0314 05:32:57.268597 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:32:57 crc kubenswrapper[4817]: I0314 05:32:57.269459 4817 scope.go:117] "RemoveContainer" containerID="d53a651a485b41bd420cc405d2f26c474042daca2673c3413b88b024631227c2"
Mar 14 05:32:57 crc kubenswrapper[4817]: E0314 05:32:57.269799 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 14 05:32:57 crc kubenswrapper[4817]: I0314 05:32:57.660743 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:57Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:58 crc kubenswrapper[4817]: I0314 05:32:58.661613 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:58Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:59 crc kubenswrapper[4817]: I0314 05:32:59.660806 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:32:59Z is after 2026-02-23T05:33:13Z
Mar 14 05:32:59 crc kubenswrapper[4817]: I0314 05:32:59.745170 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 05:32:59 crc kubenswrapper[4817]: I0314 05:32:59.745364 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:32:59 crc kubenswrapper[4817]: I0314 05:32:59.746707 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:32:59 crc kubenswrapper[4817]: I0314 05:32:59.746782 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:32:59 crc kubenswrapper[4817]: I0314 05:32:59.746803 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:33:00 crc kubenswrapper[4817]: I0314 05:33:00.116215 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 05:33:00 crc kubenswrapper[4817]: I0314 05:33:00.116426 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:33:00 crc kubenswrapper[4817]: I0314 05:33:00.118012 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:33:00 crc kubenswrapper[4817]: I0314 05:33:00.118070 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:33:00 crc kubenswrapper[4817]: I0314 05:33:00.118093 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:33:00 crc kubenswrapper[4817]: I0314 05:33:00.658662 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:00Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:01 crc kubenswrapper[4817]: I0314 05:33:01.661179 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:01Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:01 crc kubenswrapper[4817]: E0314 05:33:01.931063 4817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:01Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c9e3efd61ad71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,LastTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:02 crc kubenswrapper[4817]: E0314 05:33:02.340908 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:02Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 14 05:33:02 crc kubenswrapper[4817]: I0314 05:33:02.348501 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:33:02 crc kubenswrapper[4817]: I0314 05:33:02.349738 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:33:02 crc kubenswrapper[4817]: I0314 05:33:02.349775 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:33:02 crc kubenswrapper[4817]: I0314 05:33:02.349787 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:33:02 crc kubenswrapper[4817]: I0314 05:33:02.349816 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 05:33:02 crc kubenswrapper[4817]: E0314 05:33:02.353347 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:02Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 14 05:33:02 crc kubenswrapper[4817]: I0314 05:33:02.661688 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:02Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:03 crc kubenswrapper[4817]: W0314 05:33:03.052456 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:03Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:03 crc kubenswrapper[4817]: E0314 05:33:03.052593 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:03Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 05:33:03 crc kubenswrapper[4817]: I0314 05:33:03.116471 4817 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 05:33:03 crc kubenswrapper[4817]: I0314 05:33:03.116563 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 14 05:33:03 crc kubenswrapper[4817]: I0314 05:33:03.659184 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:03Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:04 crc kubenswrapper[4817]: I0314 05:33:04.307739 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:33:04 crc kubenswrapper[4817]: I0314 05:33:04.308014 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:33:04 crc kubenswrapper[4817]: I0314 05:33:04.309390 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:33:04 crc kubenswrapper[4817]: I0314 05:33:04.309513 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:33:04 crc kubenswrapper[4817]: I0314 05:33:04.309525 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:33:04 crc kubenswrapper[4817]: I0314 05:33:04.310165 4817 scope.go:117] "RemoveContainer" containerID="d53a651a485b41bd420cc405d2f26c474042daca2673c3413b88b024631227c2"
Mar 14 05:33:04 crc kubenswrapper[4817]: E0314 05:33:04.310394 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 14 05:33:04 crc kubenswrapper[4817]: I0314 05:33:04.658652 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:04Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:05 crc kubenswrapper[4817]: I0314 05:33:05.661754 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:05Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:06 crc kubenswrapper[4817]: I0314 05:33:06.661685 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:06Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:07 crc kubenswrapper[4817]: E0314 05:33:07.064444 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 05:33:07 crc kubenswrapper[4817]: I0314 05:33:07.627238 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 14 05:33:07 crc kubenswrapper[4817]: E0314 05:33:07.637283 4817 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 05:33:07 crc kubenswrapper[4817]: E0314 05:33:07.638489 4817 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError"
Mar 14 05:33:07 crc kubenswrapper[4817]: I0314 05:33:07.660500 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:07Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:08 crc kubenswrapper[4817]: I0314 05:33:08.659886 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:08Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:09 crc kubenswrapper[4817]: E0314 05:33:09.344987 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:09Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 14 05:33:09 crc kubenswrapper[4817]: I0314 05:33:09.354063 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 05:33:09 crc kubenswrapper[4817]: I0314 05:33:09.354947 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:33:09 crc kubenswrapper[4817]: I0314 05:33:09.354984 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:33:09 crc kubenswrapper[4817]: I0314 05:33:09.354998 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:33:09 crc kubenswrapper[4817]: I0314 05:33:09.355022 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 05:33:09 crc kubenswrapper[4817]: E0314 05:33:09.359077 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:09Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 14 05:33:09 crc kubenswrapper[4817]: I0314 05:33:09.659621 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T05:33:09Z is after 2026-02-23T05:33:13Z
Mar 14 05:33:10 crc kubenswrapper[4817]: I0314 05:33:10.661213 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 05:33:11 crc kubenswrapper[4817]: W0314 05:33:11.476787 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.476865 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 14 05:33:11 crc kubenswrapper[4817]: I0314 05:33:11.661511 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.939019 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3efd61ad71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,LastTimestamp:2026-03-14 05:32:26.653330801 +0000 UTC m=+0.691591587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.950814 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d623ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,LastTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.960002 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d67097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,LastTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.966656 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d69e88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.72810356 +0000 UTC m=+0.766364316,LastTimestamp:2026-03-14 05:32:26.72810356 +0000 UTC m=+0.766364316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.973099 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f15e94297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:27.064869527 +0000 UTC m=+1.103130273,LastTimestamp:2026-03-14 05:32:27.064869527 +0000 UTC m=+1.103130273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.976371 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d623ad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d623ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,LastTimestamp:2026-03-14 05:32:27.153816332 +0000 UTC m=+1.192077088,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.982162 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d67097\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d67097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,LastTimestamp:2026-03-14 05:32:27.153838922 +0000 UTC m=+1.192099668,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.989797 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d69e88\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d69e88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.72810356 +0000 UTC m=+0.766364316,LastTimestamp:2026-03-14 05:32:27.153848443 +0000 UTC m=+1.192109189,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:11 crc kubenswrapper[4817]: E0314 05:33:11.995987 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d623ad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d623ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,LastTimestamp:2026-03-14 05:32:27.357460456 +0000 UTC m=+1.395721212,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.002284 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d67097\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d67097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,LastTimestamp:2026-03-14 05:32:27.357481176 +0000 UTC m=+1.395741922,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.008615 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d69e88\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d69e88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.72810356 +0000 UTC m=+0.766364316,LastTimestamp:2026-03-14 05:32:27.357492177 +0000 UTC m=+1.395752923,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.014883 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d623ad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d623ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,LastTimestamp:2026-03-14 05:32:27.435216054 +0000 UTC m=+1.473476810,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.020630 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d67097\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d67097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,LastTimestamp:2026-03-14 05:32:27.435232764 +0000 UTC m=+1.473493520,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.027859 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d69e88\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d69e88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.72810356 +0000 UTC m=+0.766364316,LastTimestamp:2026-03-14 05:32:27.435243775 +0000 UTC m=+1.473504531,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.034448 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d623ad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d623ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,LastTimestamp:2026-03-14 05:32:27.436384939 +0000 UTC m=+1.474645725,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.041254 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d67097\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d67097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,LastTimestamp:2026-03-14 05:32:27.43641599 +0000 UTC m=+1.474676776,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.047501 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d69e88\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d69e88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.72810356 +0000 UTC m=+0.766364316,LastTimestamp:2026-03-14 05:32:27.436436921 +0000 UTC m=+1.474697717,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.054953 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d623ad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d623ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,LastTimestamp:2026-03-14 05:32:27.43773379 +0000 UTC m=+1.475994546,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.061610 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d67097\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d67097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,LastTimestamp:2026-03-14 05:32:27.4377572 +0000 UTC m=+1.476017956,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.067935 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d69e88\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d69e88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.72810356 +0000 UTC m=+0.766364316,LastTimestamp:2026-03-14 05:32:27.437770101 +0000 UTC m=+1.476030857,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.075720 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d623ad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d623ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,LastTimestamp:2026-03-14 05:32:27.438151562 +0000 UTC m=+1.476413178,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.081266 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d67097\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d67097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,LastTimestamp:2026-03-14 05:32:27.438201524 +0000 UTC m=+1.476462370,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.083061 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d69e88\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d69e88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.72810356 +0000 UTC m=+0.766364316,LastTimestamp:2026-03-14 05:32:27.438237815 +0000 UTC m=+1.476498601,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.088782 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d623ad\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d623ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728072109 +0000 UTC m=+0.766332865,LastTimestamp:2026-03-14 05:32:27.44641819 +0000 UTC m=+1.484678936,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.090941 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c9e3f01d67097\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c9e3f01d67097 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:26.728091799 +0000 UTC m=+0.766352555,LastTimestamp:2026-03-14 05:32:27.446443181 +0000 UTC m=+1.484703927,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.099790 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e3f4798e1aa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:27.898462634 +0000 UTC m=+1.936723390,LastTimestamp:2026-03-14 05:32:27.898462634 +0000 UTC m=+1.936723390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.106502 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c9e3f479c57b4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:27.89868946 +0000 UTC m=+1.936950256,LastTimestamp:2026-03-14 05:32:27.89868946 +0000 UTC m=+1.936950256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.112423 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3f47f5e12d 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:27.904557357 +0000 UTC m=+1.942818103,LastTimestamp:2026-03-14 05:32:27.904557357 +0000 UTC m=+1.942818103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.118569 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f4806ce3c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:27.90566662 +0000 UTC m=+1.943927366,LastTimestamp:2026-03-14 05:32:27.90566662 +0000 UTC m=+1.943927366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.124450 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3f4a9f7ca2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:27.94922717 +0000 UTC m=+1.987487966,LastTimestamp:2026-03-14 05:32:27.94922717 +0000 UTC m=+1.987487966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.128646 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3f797284de openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container 
wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.73480931 +0000 UTC m=+2.773070086,LastTimestamp:2026-03-14 05:32:28.73480931 +0000 UTC m=+2.773070086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.135164 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e3f79732421 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.734850081 +0000 UTC m=+2.773110837,LastTimestamp:2026-03-14 05:32:28.734850081 +0000 UTC m=+2.773110837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.143406 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3f797606bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.735039167 +0000 UTC m=+2.773299933,LastTimestamp:2026-03-14 05:32:28.735039167 +0000 UTC m=+2.773299933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.147642 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c9e3f7978e6fa openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.735227642 +0000 UTC m=+2.773488388,LastTimestamp:2026-03-14 05:32:28.735227642 +0000 UTC m=+2.773488388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.151708 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f797c21cd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.735439309 +0000 UTC m=+2.773700065,LastTimestamp:2026-03-14 05:32:28.735439309 +0000 UTC m=+2.773700065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.157693 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e3f7a6f5cc4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.751379652 +0000 UTC m=+2.789640398,LastTimestamp:2026-03-14 05:32:28.751379652 +0000 UTC m=+2.789640398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.164272 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f7a88eb3b openshift-kube-controller-manager 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.753054523 +0000 UTC m=+2.791315269,LastTimestamp:2026-03-14 05:32:28.753054523 +0000 UTC m=+2.791315269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.171544 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c9e3f7a8be521 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.753249569 +0000 UTC m=+2.791510315,LastTimestamp:2026-03-14 05:32:28.753249569 +0000 UTC m=+2.791510315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.182063 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3f7a96b880 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.75395904 +0000 UTC m=+2.792219786,LastTimestamp:2026-03-14 05:32:28.75395904 +0000 UTC m=+2.792219786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.187745 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f7aa139af openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.754647471 +0000 UTC m=+2.792908217,LastTimestamp:2026-03-14 05:32:28.754647471 +0000 UTC m=+2.792908217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.194255 4817 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3f7aca5337 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.757340983 +0000 UTC m=+2.795601769,LastTimestamp:2026-03-14 05:32:28.757340983 +0000 UTC m=+2.795601769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.199347 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f903a1e2c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.116988972 +0000 UTC m=+3.155249718,LastTimestamp:2026-03-14 05:32:29.116988972 +0000 UTC m=+3.155249718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.206109 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f90f2174e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.129045838 +0000 UTC m=+3.167306594,LastTimestamp:2026-03-14 05:32:29.129045838 +0000 UTC m=+3.167306594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.211586 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f9108ec17 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.130542103 +0000 UTC m=+3.168802849,LastTimestamp:2026-03-14 05:32:29.130542103 +0000 UTC m=+3.168802849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.216970 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f9aa9ba99 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.292075673 +0000 UTC m=+3.330336419,LastTimestamp:2026-03-14 05:32:29.292075673 +0000 UTC m=+3.330336419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.222206 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f9b730ab1 openshift-kube-controller-manager 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.305268913 +0000 UTC m=+3.343529649,LastTimestamp:2026-03-14 05:32:29.305268913 +0000 UTC m=+3.343529649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.228061 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f9b875756 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.306599254 +0000 UTC m=+3.344860000,LastTimestamp:2026-03-14 05:32:29.306599254 +0000 UTC m=+3.344860000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 
05:33:12.234553 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3fa6235ffb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.484597243 +0000 UTC m=+3.522857989,LastTimestamp:2026-03-14 05:32:29.484597243 +0000 UTC m=+3.522857989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.240940 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3fa6f8b597 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.498578327 +0000 UTC m=+3.536839113,LastTimestamp:2026-03-14 
05:32:29.498578327 +0000 UTC m=+3.536839113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.247701 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c9e3fb6a75256 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.761679958 +0000 UTC m=+3.799940714,LastTimestamp:2026-03-14 05:32:29.761679958 +0000 UTC m=+3.799940714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.253406 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fb6d65d97 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.764763031 +0000 UTC m=+3.803023777,LastTimestamp:2026-03-14 05:32:29.764763031 +0000 UTC m=+3.803023777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.261847 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fb788fad8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.776468696 +0000 UTC m=+3.814729472,LastTimestamp:2026-03-14 05:32:29.776468696 +0000 UTC m=+3.814729472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.268109 4817 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e3fb7c2c051 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.780254801 +0000 UTC m=+3.818515557,LastTimestamp:2026-03-14 05:32:29.780254801 +0000 UTC m=+3.818515557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.273555 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fc538cd76 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.006095222 +0000 UTC m=+4.044355968,LastTimestamp:2026-03-14 05:32:30.006095222 +0000 UTC m=+4.044355968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.284025 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c9e3fc56557b8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.0090142 +0000 UTC m=+4.047274946,LastTimestamp:2026-03-14 05:32:30.0090142 +0000 UTC m=+4.047274946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.287931 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fc566fc47 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.009121863 +0000 UTC m=+4.047382609,LastTimestamp:2026-03-14 05:32:30.009121863 +0000 
UTC m=+4.047382609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.294490 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e3fc56708d1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.009125073 +0000 UTC m=+4.047385819,LastTimestamp:2026-03-14 05:32:30.009125073 +0000 UTC m=+4.047385819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.300945 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fc5d8f84e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.01659195 +0000 UTC m=+4.054852696,LastTimestamp:2026-03-14 
05:32:30.01659195 +0000 UTC m=+4.054852696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.307498 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fc5e857e5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.017599461 +0000 UTC m=+4.055860207,LastTimestamp:2026-03-14 05:32:30.017599461 +0000 UTC m=+4.055860207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.312795 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fc63c3a23 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.023096867 +0000 UTC m=+4.061357613,LastTimestamp:2026-03-14 05:32:30.023096867 +0000 UTC m=+4.061357613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.318382 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fc64bb3dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.024111068 +0000 UTC m=+4.062371814,LastTimestamp:2026-03-14 05:32:30.024111068 +0000 UTC m=+4.062371814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.325025 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c9e3fc65041c1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.024409537 +0000 UTC m=+4.062670283,LastTimestamp:2026-03-14 05:32:30.024409537 +0000 UTC m=+4.062670283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.330753 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e3fc8326e0a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.056009226 +0000 UTC m=+4.094269972,LastTimestamp:2026-03-14 05:32:30.056009226 +0000 UTC m=+4.094269972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.336116 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fd030b1a9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.190113193 +0000 UTC m=+4.228373939,LastTimestamp:2026-03-14 05:32:30.190113193 +0000 UTC m=+4.228373939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.341008 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fd17964f1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.211654897 +0000 UTC m=+4.249915643,LastTimestamp:2026-03-14 05:32:30.211654897 +0000 UTC m=+4.249915643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 
05:33:12.346329 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fd18ca09b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.212915355 +0000 UTC m=+4.251176101,LastTimestamp:2026-03-14 05:32:30.212915355 +0000 UTC m=+4.251176101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.352221 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fd1a0a861 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.214228065 +0000 UTC m=+4.252488811,LastTimestamp:2026-03-14 
05:32:30.214228065 +0000 UTC m=+4.252488811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.358600 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fd2db98f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.234867961 +0000 UTC m=+4.273128707,LastTimestamp:2026-03-14 05:32:30.234867961 +0000 UTC m=+4.273128707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.365544 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fd2ed0c9c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.236011676 +0000 UTC m=+4.274272422,LastTimestamp:2026-03-14 05:32:30.236011676 +0000 UTC m=+4.274272422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.372679 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fde723a3a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.429289018 +0000 UTC m=+4.467549764,LastTimestamp:2026-03-14 05:32:30.429289018 +0000 UTC m=+4.467549764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.379965 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fde9861c0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.431789504 +0000 UTC m=+4.470050250,LastTimestamp:2026-03-14 05:32:30.431789504 +0000 UTC m=+4.470050250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.386882 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c9e3fdf9df33c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.448931644 +0000 UTC m=+4.487192380,LastTimestamp:2026-03-14 05:32:30.448931644 +0000 UTC m=+4.487192380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.394140 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fdfd7dd1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.452727069 +0000 UTC m=+4.490987815,LastTimestamp:2026-03-14 05:32:30.452727069 +0000 UTC m=+4.490987815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.401174 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fdfe745b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.45373688 +0000 UTC m=+4.491997626,LastTimestamp:2026-03-14 05:32:30.45373688 +0000 UTC m=+4.491997626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.407432 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fecf82255 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.672945749 +0000 UTC m=+4.711206495,LastTimestamp:2026-03-14 05:32:30.672945749 +0000 UTC m=+4.711206495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.411302 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fedb27f16 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.68515919 +0000 UTC m=+4.723419936,LastTimestamp:2026-03-14 
05:32:30.68515919 +0000 UTC m=+4.723419936,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.417587 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fedc9b5e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.686680546 +0000 UTC m=+4.724941302,LastTimestamp:2026-03-14 05:32:30.686680546 +0000 UTC m=+4.724941302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.422660 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e3ff48158fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.799378684 +0000 UTC m=+4.837639440,LastTimestamp:2026-03-14 05:32:30.799378684 +0000 UTC m=+4.837639440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.427163 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3ff9769fb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.882561977 +0000 UTC m=+4.920822723,LastTimestamp:2026-03-14 05:32:30.882561977 +0000 UTC m=+4.920822723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.431616 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3ff9f9b9fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.891153918 +0000 UTC m=+4.929414664,LastTimestamp:2026-03-14 05:32:30.891153918 +0000 UTC m=+4.929414664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.435832 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e4000700354 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.999569236 +0000 UTC m=+5.037829982,LastTimestamp:2026-03-14 05:32:30.999569236 +0000 UTC m=+5.037829982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.440013 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e400162e2b0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:31.015486128 +0000 UTC m=+5.053746874,LastTimestamp:2026-03-14 05:32:31.015486128 +0000 UTC m=+5.053746874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.447782 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e40320d4834 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:31.831959604 +0000 UTC m=+5.870220380,LastTimestamp:2026-03-14 05:32:31.831959604 +0000 UTC m=+5.870220380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.452452 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189c9e403fd867d2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.063375314 +0000 UTC m=+6.101636060,LastTimestamp:2026-03-14 05:32:32.063375314 +0000 UTC m=+6.101636060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.456703 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e4040a3cd5e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.076705118 +0000 UTC m=+6.114965874,LastTimestamp:2026-03-14 05:32:32.076705118 +0000 UTC m=+6.114965874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.461271 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e4040b450af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.077787311 +0000 UTC m=+6.116048067,LastTimestamp:2026-03-14 05:32:32.077787311 +0000 UTC m=+6.116048067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.470973 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e404c31a7e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.270551008 +0000 UTC m=+6.308811754,LastTimestamp:2026-03-14 05:32:32.270551008 +0000 UTC m=+6.308811754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.476967 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e404cebb28e openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.282743438 +0000 UTC m=+6.321004184,LastTimestamp:2026-03-14 05:32:32.282743438 +0000 UTC m=+6.321004184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.481211 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e404cff049a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.284009626 +0000 UTC m=+6.322270372,LastTimestamp:2026-03-14 05:32:32.284009626 +0000 UTC m=+6.322270372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.487469 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189c9e405c55759c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.541332892 +0000 UTC m=+6.579593638,LastTimestamp:2026-03-14 05:32:32.541332892 +0000 UTC m=+6.579593638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.491396 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e405d065edf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.552926943 +0000 UTC m=+6.591187699,LastTimestamp:2026-03-14 05:32:32.552926943 +0000 UTC m=+6.591187699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.496471 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e405d177aa8 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.554048168 +0000 UTC m=+6.592308914,LastTimestamp:2026-03-14 05:32:32.554048168 +0000 UTC m=+6.592308914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.501350 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e406a822cc1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.779144385 +0000 UTC m=+6.817405131,LastTimestamp:2026-03-14 05:32:32.779144385 +0000 UTC m=+6.817405131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.505871 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189c9e406b99599a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.79744041 +0000 UTC m=+6.835701146,LastTimestamp:2026-03-14 05:32:32.79744041 +0000 UTC m=+6.835701146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.510211 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e406bac7794 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:32.798693268 +0000 UTC m=+6.836954014,LastTimestamp:2026-03-14 05:32:32.798693268 +0000 UTC m=+6.836954014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.516797 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e4077f819ac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:33.004976556 +0000 UTC m=+7.043237322,LastTimestamp:2026-03-14 05:32:33.004976556 +0000 UTC m=+7.043237322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.521130 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c9e4078b73061 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:33.017499745 +0000 UTC m=+7.055760511,LastTimestamp:2026-03-14 05:32:33.017499745 +0000 UTC m=+7.055760511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.525150 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 05:33:12 crc 
kubenswrapper[4817]: &Event{ObjectMeta:{kube-controller-manager-crc.189c9e407e96bd52 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 14 05:33:12 crc kubenswrapper[4817]: body: Mar 14 05:33:12 crc kubenswrapper[4817]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:33.116036434 +0000 UTC m=+7.154297190,LastTimestamp:2026-03-14 05:32:33.116036434 +0000 UTC m=+7.154297190,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 05:33:12 crc kubenswrapper[4817]: > Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.529706 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e407e97ec75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:33.116114037 +0000 UTC 
m=+7.154374803,LastTimestamp:2026-03-14 05:32:33.116114037 +0000 UTC m=+7.154374803,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.539008 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 14 05:33:12 crc kubenswrapper[4817]: &Event{ObjectMeta:{kube-apiserver-crc.189c9e428b32d700 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 14 05:33:12 crc kubenswrapper[4817]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 05:33:12 crc kubenswrapper[4817]: Mar 14 05:33:12 crc kubenswrapper[4817]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:41.917527808 +0000 UTC m=+15.955788574,LastTimestamp:2026-03-14 05:32:41.917527808 +0000 UTC m=+15.955788574,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 05:33:12 crc kubenswrapper[4817]: > Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.544783 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e428b3aa331 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:41.918038833 +0000 UTC m=+15.956299579,LastTimestamp:2026-03-14 05:32:41.918038833 +0000 UTC m=+15.956299579,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.549627 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c9e428b32d700\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 14 05:33:12 crc kubenswrapper[4817]: &Event{ObjectMeta:{kube-apiserver-crc.189c9e428b32d700 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 14 05:33:12 crc kubenswrapper[4817]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 05:33:12 crc kubenswrapper[4817]: Mar 14 05:33:12 crc kubenswrapper[4817]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:41.917527808 +0000 UTC 
m=+15.955788574,LastTimestamp:2026-03-14 05:32:41.929207821 +0000 UTC m=+15.967468567,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 05:33:12 crc kubenswrapper[4817]: > Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.555109 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c9e428b3aa331\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e428b3aa331 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:41.918038833 +0000 UTC m=+15.956299579,LastTimestamp:2026-03-14 05:32:41.929261082 +0000 UTC m=+15.967521828,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.562781 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c9e3fedc9b5e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3fedc9b5e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.686680546 +0000 UTC m=+4.724941302,LastTimestamp:2026-03-14 05:32:42.878526273 +0000 UTC m=+16.916787059,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.567396 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c9e3ff9769fb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3ff9769fb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.882561977 +0000 UTC m=+4.920822723,LastTimestamp:2026-03-14 05:32:43.111718285 +0000 UTC m=+17.149979031,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.571675 4817 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-controller-manager-crc.189c9e407e96bd52\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 05:33:12 crc kubenswrapper[4817]: &Event{ObjectMeta:{kube-controller-manager-crc.189c9e407e96bd52 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 14 05:33:12 crc kubenswrapper[4817]: body: Mar 14 05:33:12 crc kubenswrapper[4817]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:33.116036434 +0000 UTC m=+7.154297190,LastTimestamp:2026-03-14 05:32:43.117535625 +0000 UTC m=+17.155796411,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 05:33:12 crc kubenswrapper[4817]: > Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.577279 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c9e407e97ec75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e407e97ec75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:33.116114037 +0000 UTC m=+7.154374803,LastTimestamp:2026-03-14 05:32:43.118613007 +0000 UTC m=+17.156873783,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.583633 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c9e3ff9f9b9fe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c9e3ff9f9b9fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:30.891153918 +0000 UTC m=+4.929414664,LastTimestamp:2026-03-14 05:32:43.12212084 +0000 UTC m=+17.160381586,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.589864 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 05:33:12 crc kubenswrapper[4817]: &Event{ObjectMeta:{kube-controller-manager-crc.189c9e4526bdcb88 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 05:33:12 crc kubenswrapper[4817]: body: Mar 14 05:33:12 crc kubenswrapper[4817]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:53.117037448 +0000 UTC m=+27.155298234,LastTimestamp:2026-03-14 05:32:53.117037448 +0000 UTC m=+27.155298234,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 05:33:12 crc kubenswrapper[4817]: > Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.594683 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e4526bf34fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:53.117129981 +0000 UTC m=+27.155390767,LastTimestamp:2026-03-14 05:32:53.117129981 +0000 UTC m=+27.155390767,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.600789 4817 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e4526f98e6b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:53.120953963 +0000 UTC m=+27.159214749,LastTimestamp:2026-03-14 05:32:53.120953963 +0000 UTC m=+27.159214749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.603282 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c9e3f7aa139af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f7aa139af 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:28.754647471 +0000 UTC m=+2.792908217,LastTimestamp:2026-03-14 05:32:53.246342022 +0000 UTC m=+27.284602798,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.607693 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c9e3f903a1e2c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f903a1e2c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.116988972 +0000 UTC m=+3.155249718,LastTimestamp:2026-03-14 05:32:53.440808187 +0000 UTC m=+27.479068933,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc 
kubenswrapper[4817]: E0314 05:33:12.609441 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c9e3f90f2174e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e3f90f2174e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:29.129045838 +0000 UTC m=+3.167306594,LastTimestamp:2026-03-14 05:32:53.452403058 +0000 UTC m=+27.490663824,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.616965 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c9e4526bdcb88\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 05:33:12 crc kubenswrapper[4817]: &Event{ObjectMeta:{kube-controller-manager-crc.189c9e4526bdcb88 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 05:33:12 crc kubenswrapper[4817]: body: Mar 14 05:33:12 crc kubenswrapper[4817]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:53.117037448 +0000 UTC m=+27.155298234,LastTimestamp:2026-03-14 05:33:03.116536375 +0000 UTC m=+37.154797161,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 05:33:12 crc kubenswrapper[4817]: > Mar 14 05:33:12 crc kubenswrapper[4817]: E0314 05:33:12.622953 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c9e4526bf34fd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c9e4526bf34fd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:53.117129981 +0000 UTC m=+27.155390767,LastTimestamp:2026-03-14 05:33:03.116600157 +0000 UTC m=+37.154860933,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:33:12 crc kubenswrapper[4817]: I0314 05:33:12.660478 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" 
is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:13 crc kubenswrapper[4817]: I0314 05:33:13.116502 4817 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 05:33:13 crc kubenswrapper[4817]: I0314 05:33:13.116611 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 05:33:13 crc kubenswrapper[4817]: E0314 05:33:13.124306 4817 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c9e4526bdcb88\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 05:33:13 crc kubenswrapper[4817]: &Event{ObjectMeta:{kube-controller-manager-crc.189c9e4526bdcb88 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 05:33:13 crc kubenswrapper[4817]: 
body: Mar 14 05:33:13 crc kubenswrapper[4817]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:32:53.117037448 +0000 UTC m=+27.155298234,LastTimestamp:2026-03-14 05:33:13.116590065 +0000 UTC m=+47.154850851,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 05:33:13 crc kubenswrapper[4817]: > Mar 14 05:33:13 crc kubenswrapper[4817]: I0314 05:33:13.661656 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:14 crc kubenswrapper[4817]: I0314 05:33:14.661105 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:15 crc kubenswrapper[4817]: W0314 05:33:15.402179 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 14 05:33:15 crc kubenswrapper[4817]: E0314 05:33:15.402261 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 14 05:33:15 crc kubenswrapper[4817]: W0314 05:33:15.640525 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the 
cluster scope Mar 14 05:33:15 crc kubenswrapper[4817]: E0314 05:33:15.640610 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 14 05:33:15 crc kubenswrapper[4817]: I0314 05:33:15.665005 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:16 crc kubenswrapper[4817]: E0314 05:33:16.354526 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 05:33:16 crc kubenswrapper[4817]: I0314 05:33:16.359991 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:16 crc kubenswrapper[4817]: I0314 05:33:16.362201 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:16 crc kubenswrapper[4817]: I0314 05:33:16.362283 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:16 crc kubenswrapper[4817]: I0314 05:33:16.362310 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:16 crc kubenswrapper[4817]: I0314 05:33:16.362371 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:33:16 crc kubenswrapper[4817]: E0314 05:33:16.369867 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 05:33:16 crc kubenswrapper[4817]: I0314 05:33:16.664072 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:17 crc kubenswrapper[4817]: E0314 05:33:17.064598 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 05:33:17 crc kubenswrapper[4817]: I0314 05:33:17.661355 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:17 crc kubenswrapper[4817]: I0314 05:33:17.731983 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:17 crc kubenswrapper[4817]: I0314 05:33:17.734062 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:17 crc kubenswrapper[4817]: I0314 05:33:17.734122 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:17 crc kubenswrapper[4817]: I0314 05:33:17.734141 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:17 crc kubenswrapper[4817]: I0314 05:33:17.735112 4817 scope.go:117] "RemoveContainer" containerID="d53a651a485b41bd420cc405d2f26c474042daca2673c3413b88b024631227c2" Mar 14 05:33:18 crc kubenswrapper[4817]: I0314 05:33:18.023884 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 05:33:18 crc kubenswrapper[4817]: I0314 05:33:18.025746 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06"} Mar 14 05:33:18 crc kubenswrapper[4817]: I0314 05:33:18.026012 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:18 crc kubenswrapper[4817]: I0314 05:33:18.027347 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:18 crc kubenswrapper[4817]: I0314 05:33:18.027410 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:18 crc kubenswrapper[4817]: I0314 05:33:18.027429 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:18 crc kubenswrapper[4817]: I0314 05:33:18.661641 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.033011 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.033754 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.036326 4817 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06" exitCode=255 Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.036377 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06"} Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.036429 4817 scope.go:117] "RemoveContainer" containerID="d53a651a485b41bd420cc405d2f26c474042daca2673c3413b88b024631227c2" Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.036737 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.044564 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.044633 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.044664 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.046176 4817 scope.go:117] "RemoveContainer" containerID="c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06" Mar 14 05:33:19 crc kubenswrapper[4817]: E0314 05:33:19.046505 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:33:19 crc kubenswrapper[4817]: I0314 05:33:19.662265 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:20 crc kubenswrapper[4817]: I0314 05:33:20.042465 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 05:33:20 crc kubenswrapper[4817]: I0314 05:33:20.661539 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:21 crc kubenswrapper[4817]: I0314 05:33:21.664303 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:21 crc kubenswrapper[4817]: I0314 05:33:21.922452 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 05:33:21 crc kubenswrapper[4817]: I0314 05:33:21.924074 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:21 crc kubenswrapper[4817]: I0314 05:33:21.925820 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:21 crc kubenswrapper[4817]: I0314 05:33:21.925876 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:21 crc kubenswrapper[4817]: I0314 05:33:21.925910 4817 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:22 crc kubenswrapper[4817]: I0314 05:33:22.661447 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.118570 4817 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.118707 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.118805 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.120239 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.121885 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.121942 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:23 crc 
kubenswrapper[4817]: I0314 05:33:23.121952 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.122421 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"2a10d3c420d8ba02afe73e26c37342660706936b6949c66530e39bac3454f88a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.122515 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://2a10d3c420d8ba02afe73e26c37342660706936b6949c66530e39bac3454f88a" gracePeriod=30 Mar 14 05:33:23 crc kubenswrapper[4817]: E0314 05:33:23.358880 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.370373 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.372442 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.372487 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.372496 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 
14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.372533 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:33:23 crc kubenswrapper[4817]: E0314 05:33:23.378214 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 05:33:23 crc kubenswrapper[4817]: I0314 05:33:23.661871 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.059321 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.060919 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.061385 4817 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2a10d3c420d8ba02afe73e26c37342660706936b6949c66530e39bac3454f88a" exitCode=255 Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.061484 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2a10d3c420d8ba02afe73e26c37342660706936b6949c66530e39bac3454f88a"} Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.061568 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6cfedadaf0c26bf47425bb6613aece983b42cba804bdb3418a6477b014f352a2"} Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.061608 4817 scope.go:117] "RemoveContainer" containerID="2aa90bf900ca4ebcff2100db7311e7c25aaab7e81f941eb51261122084cd704e" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.061755 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.062889 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.062966 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.062984 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.307477 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.307792 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.309710 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.309771 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.309789 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.310642 4817 scope.go:117] "RemoveContainer" 
containerID="c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06" Mar 14 05:33:24 crc kubenswrapper[4817]: E0314 05:33:24.311016 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:33:24 crc kubenswrapper[4817]: I0314 05:33:24.660012 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:25 crc kubenswrapper[4817]: I0314 05:33:25.066605 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 05:33:25 crc kubenswrapper[4817]: I0314 05:33:25.663080 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:26 crc kubenswrapper[4817]: I0314 05:33:26.664166 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:27 crc kubenswrapper[4817]: E0314 05:33:27.064748 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 05:33:27 crc kubenswrapper[4817]: I0314 05:33:27.267000 4817 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:33:27 crc kubenswrapper[4817]: I0314 05:33:27.267663 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:27 crc kubenswrapper[4817]: I0314 05:33:27.269670 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:27 crc kubenswrapper[4817]: I0314 05:33:27.269744 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:27 crc kubenswrapper[4817]: I0314 05:33:27.269764 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:27 crc kubenswrapper[4817]: I0314 05:33:27.270725 4817 scope.go:117] "RemoveContainer" containerID="c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06" Mar 14 05:33:27 crc kubenswrapper[4817]: E0314 05:33:27.271039 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:33:27 crc kubenswrapper[4817]: I0314 05:33:27.663258 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:28 crc kubenswrapper[4817]: I0314 05:33:28.659546 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Mar 14 05:33:29 crc kubenswrapper[4817]: I0314 05:33:29.663646 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:29 crc kubenswrapper[4817]: I0314 05:33:29.745138 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:33:29 crc kubenswrapper[4817]: I0314 05:33:29.745583 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:29 crc kubenswrapper[4817]: I0314 05:33:29.747037 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:29 crc kubenswrapper[4817]: I0314 05:33:29.747088 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:29 crc kubenswrapper[4817]: I0314 05:33:29.747101 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.115863 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.116093 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.117359 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.117468 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.117550 4817 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.120228 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:33:30 crc kubenswrapper[4817]: E0314 05:33:30.368632 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.378738 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.379967 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.380098 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.380196 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.380298 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:33:30 crc kubenswrapper[4817]: E0314 05:33:30.385484 4817 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 05:33:30 crc kubenswrapper[4817]: I0314 05:33:30.661286 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 14 05:33:31 crc kubenswrapper[4817]: I0314 05:33:31.083165 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:31 crc kubenswrapper[4817]: I0314 05:33:31.084179 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:31 crc kubenswrapper[4817]: I0314 05:33:31.084241 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:31 crc kubenswrapper[4817]: I0314 05:33:31.084256 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:31 crc kubenswrapper[4817]: I0314 05:33:31.659750 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:32 crc kubenswrapper[4817]: I0314 05:33:32.661254 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:33 crc kubenswrapper[4817]: I0314 05:33:33.660579 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:34 crc kubenswrapper[4817]: I0314 05:33:34.660183 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:35 crc kubenswrapper[4817]: I0314 05:33:35.661513 4817 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:36 crc kubenswrapper[4817]: I0314 05:33:36.661709 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:37 crc kubenswrapper[4817]: E0314 05:33:37.064981 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 05:33:37 crc kubenswrapper[4817]: E0314 05:33:37.374511 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 05:33:37 crc kubenswrapper[4817]: I0314 05:33:37.386551 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:37 crc kubenswrapper[4817]: I0314 05:33:37.387797 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:37 crc kubenswrapper[4817]: I0314 05:33:37.387863 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:37 crc kubenswrapper[4817]: I0314 05:33:37.387884 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:37 crc kubenswrapper[4817]: I0314 05:33:37.387963 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:33:37 crc kubenswrapper[4817]: E0314 05:33:37.394355 4817 kubelet_node_status.go:99] "Unable to register 
node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 05:33:37 crc kubenswrapper[4817]: I0314 05:33:37.661094 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:38 crc kubenswrapper[4817]: W0314 05:33:38.043044 4817 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 14 05:33:38 crc kubenswrapper[4817]: E0314 05:33:38.043105 4817 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 14 05:33:38 crc kubenswrapper[4817]: I0314 05:33:38.660541 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.640184 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.659295 4817 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.664332 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.731696 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.734082 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.734187 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.734212 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.735273 4817 scope.go:117] "RemoveContainer" containerID="c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06" Mar 14 05:33:39 crc kubenswrapper[4817]: E0314 05:33:39.735597 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.751354 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.751605 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.753373 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.753436 4817 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:39 crc kubenswrapper[4817]: I0314 05:33:39.753459 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:40 crc kubenswrapper[4817]: I0314 05:33:40.663187 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:41 crc kubenswrapper[4817]: I0314 05:33:41.664228 4817 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 05:33:41 crc kubenswrapper[4817]: I0314 05:33:41.918158 4817 csr.go:261] certificate signing request csr-mhvd5 is approved, waiting to be issued Mar 14 05:33:41 crc kubenswrapper[4817]: I0314 05:33:41.932116 4817 csr.go:257] certificate signing request csr-mhvd5 is issued Mar 14 05:33:41 crc kubenswrapper[4817]: I0314 05:33:41.971691 4817 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 14 05:33:42 crc kubenswrapper[4817]: I0314 05:33:42.460637 4817 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 14 05:33:42 crc kubenswrapper[4817]: I0314 05:33:42.933573 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-28 02:24:13.343333056 +0000 UTC Mar 14 05:33:42 crc kubenswrapper[4817]: I0314 05:33:42.933656 4817 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6212h50m30.409685346s for next certificate rotation Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 
05:33:44.394664 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.395978 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.396045 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.396059 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.396206 4817 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.406351 4817 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.406696 4817 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.406733 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.410553 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.410607 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.410619 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.410643 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.410655 4817 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:33:44Z","lastTransitionTime":"2026-03-14T05:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.424153 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.432910 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.432953 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.432967 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.432990 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.433012 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:33:44Z","lastTransitionTime":"2026-03-14T05:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.443067 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.451066 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.451124 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.451159 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.451188 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.451206 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:33:44Z","lastTransitionTime":"2026-03-14T05:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.471367 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.482113 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.482179 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.482198 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.482227 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.482248 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:33:44Z","lastTransitionTime":"2026-03-14T05:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.500615 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.500862 4817 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.500963 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.601872 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.702808 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.731473 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.733520 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.733602 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:44 crc kubenswrapper[4817]: I0314 05:33:44.733633 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.803131 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:44 crc kubenswrapper[4817]: E0314 05:33:44.904020 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.004756 
4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.105020 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.205908 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: I0314 05:33:45.263450 4817 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.307173 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.408316 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.509617 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.610778 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.711418 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.812011 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:45 crc kubenswrapper[4817]: E0314 05:33:45.912200 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.012700 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc 
kubenswrapper[4817]: E0314 05:33:46.113387 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.214427 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.315017 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.415504 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.516174 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.616861 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.717727 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.818888 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:46 crc kubenswrapper[4817]: E0314 05:33:46.920100 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.021298 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.065803 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.122350 4817 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.222522 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.323956 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.424145 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.525695 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.626442 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.726998 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.828143 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:47 crc kubenswrapper[4817]: E0314 05:33:47.928807 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.029240 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.130592 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.231741 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.332809 4817 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.433851 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.534549 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.634716 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.736034 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.837021 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:48 crc kubenswrapper[4817]: E0314 05:33:48.937821 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.038624 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.139272 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.240413 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.341418 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.442309 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc 
kubenswrapper[4817]: E0314 05:33:49.543307 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.644319 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.745672 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.846816 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:49 crc kubenswrapper[4817]: E0314 05:33:49.947775 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.048751 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.149287 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.250191 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.350973 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.451809 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.552863 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.653324 4817 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.753990 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.854748 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:50 crc kubenswrapper[4817]: E0314 05:33:50.955819 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.056984 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.158203 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.258943 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.359771 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.460636 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.561069 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.661780 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.762806 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.863864 4817 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:51 crc kubenswrapper[4817]: E0314 05:33:51.964424 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.065326 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.166369 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.267482 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.368555 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.469445 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.570447 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.671529 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: I0314 05:33:52.731539 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:52 crc kubenswrapper[4817]: I0314 05:33:52.732736 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:52 crc kubenswrapper[4817]: I0314 05:33:52.732790 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:52 crc kubenswrapper[4817]: I0314 
05:33:52.732799 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:52 crc kubenswrapper[4817]: I0314 05:33:52.733630 4817 scope.go:117] "RemoveContainer" containerID="c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.733853 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.771981 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.873036 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:52 crc kubenswrapper[4817]: E0314 05:33:52.973516 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.074198 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.174562 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.275711 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.376708 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: 
E0314 05:33:53.477699 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.578833 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.679600 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.780793 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.881818 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:53 crc kubenswrapper[4817]: E0314 05:33:53.982866 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.083433 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.183722 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.284262 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.384428 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.485316 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.585518 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.686496 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.689517 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.695825 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.695944 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.695973 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.696011 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.696041 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:33:54Z","lastTransitionTime":"2026-03-14T05:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.713044 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.718701 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.718761 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.718780 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.718806 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.718824 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:33:54Z","lastTransitionTime":"2026-03-14T05:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.732043 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.736047 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.736109 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.736124 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.736146 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.736162 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:33:54Z","lastTransitionTime":"2026-03-14T05:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.746529 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.750765 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.750806 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.750817 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.750833 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:33:54 crc kubenswrapper[4817]: I0314 05:33:54.750845 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:33:54Z","lastTransitionTime":"2026-03-14T05:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.765284 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.765442 4817 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.787462 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.888120 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:54 crc kubenswrapper[4817]: E0314 05:33:54.988680 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.089759 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.190570 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.291567 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.392672 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.493679 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.595007 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.695781 4817 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: I0314 05:33:55.723776 4817 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 14 05:33:55 crc kubenswrapper[4817]: I0314 05:33:55.731709 4817 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 05:33:55 crc kubenswrapper[4817]: I0314 05:33:55.732975 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:33:55 crc kubenswrapper[4817]: I0314 05:33:55.733015 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:33:55 crc kubenswrapper[4817]: I0314 05:33:55.733024 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.796569 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.897000 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:55 crc kubenswrapper[4817]: E0314 05:33:55.998145 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc kubenswrapper[4817]: E0314 05:33:56.098858 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc kubenswrapper[4817]: E0314 05:33:56.199968 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc kubenswrapper[4817]: E0314 05:33:56.300956 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc 
kubenswrapper[4817]: E0314 05:33:56.401827 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc kubenswrapper[4817]: E0314 05:33:56.502373 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc kubenswrapper[4817]: E0314 05:33:56.603289 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc kubenswrapper[4817]: E0314 05:33:56.703498 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc kubenswrapper[4817]: E0314 05:33:56.804003 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:56 crc kubenswrapper[4817]: E0314 05:33:56.904757 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.005689 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.066030 4817 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.106484 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.207442 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.308445 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.409350 4817 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.509829 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.610544 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.712025 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.812197 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:57 crc kubenswrapper[4817]: E0314 05:33:57.912656 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.013726 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.113995 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.215304 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.315670 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.416072 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.516496 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.616958 4817 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.717982 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.818580 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:58 crc kubenswrapper[4817]: E0314 05:33:58.919752 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.020662 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.121846 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.222869 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.323780 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.424189 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.524720 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.626222 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.726962 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc 
kubenswrapper[4817]: E0314 05:33:59.827570 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:33:59 crc kubenswrapper[4817]: E0314 05:33:59.928539 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.029759 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.130280 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.230746 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.331946 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.433100 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.533634 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.633724 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.734805 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.835711 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:00 crc kubenswrapper[4817]: E0314 05:34:00.936049 4817 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.037214 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.137584 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.237732 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.338331 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.438666 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.539210 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.640495 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.741206 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.842010 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:01 crc kubenswrapper[4817]: E0314 05:34:01.942780 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.042928 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.144061 4817 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.244402 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.345242 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.446024 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.546710 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.647868 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.748809 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.849649 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:02 crc kubenswrapper[4817]: E0314 05:34:02.950782 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.050910 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.151343 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.252488 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 
05:34:03.353576 4817 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.363697 4817 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.455644 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.455712 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.455725 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.455748 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.455764 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:03Z","lastTransitionTime":"2026-03-14T05:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.558813 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.558855 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.558863 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.558878 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.558888 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:03Z","lastTransitionTime":"2026-03-14T05:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.660246 4817 apiserver.go:52] "Watching apiserver" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.661486 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.661526 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.661538 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.661556 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.661568 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:03Z","lastTransitionTime":"2026-03-14T05:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.666476 4817 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.666721 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.667104 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.667227 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.667304 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.667345 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.667356 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.667265 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.667427 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.667570 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.667628 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.669766 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.669970 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.670029 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.670268 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.670372 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.670453 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.670570 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.670627 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.671544 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.694812 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.706244 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.716932 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.725669 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.735053 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.744277 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.752786 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.758948 4817 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.760552 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.764019 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.764057 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.764070 4817 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.764085 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.764104 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:03Z","lastTransitionTime":"2026-03-14T05:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.771236 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812467 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812525 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812547 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812572 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812594 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812618 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812641 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812664 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812689 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812710 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812733 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812753 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812776 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812796 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812816 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812836 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812858 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812880 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812917 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812942 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812969 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812990 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.812971 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813011 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813034 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813058 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813082 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813106 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813130 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813154 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813177 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813200 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813222 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813243 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813263 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813281 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813316 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813338 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813359 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813378 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813396 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813416 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813438 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813457 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813475 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813497 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813518 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813537 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813560 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813583 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813604 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813626 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813647 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813670 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813692 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813712 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 
14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813735 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813759 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813782 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813806 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813828 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813850 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813873 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813915 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813938 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813958 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813977 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: 
I0314 05:34:03.814002 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814023 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814043 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814065 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814087 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814110 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814134 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814156 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814178 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814203 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814227 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814249 4817 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814273 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814293 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814312 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814336 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814358 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814381 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814405 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814428 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814453 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815147 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815186 4817 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815203 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815227 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815248 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815269 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815292 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" 
(UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815313 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815336 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815353 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815392 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815413 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815430 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815453 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815474 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815497 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815514 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815545 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815570 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815587 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815616 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815638 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815780 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815800 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815926 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816100 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816125 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816155 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816185 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816208 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816232 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816254 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816274 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816542 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816633 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816657 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816675 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816703 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816741 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816816 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 
crc kubenswrapper[4817]: I0314 05:34:03.816843 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816876 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816929 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813148 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.816956 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813269 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813478 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813721 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813759 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.813998 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817019 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814019 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814058 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814163 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817057 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817092 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817150 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817190 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817359 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817387 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817420 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817456 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817487 4817 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817680 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817734 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817822 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817857 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817951 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818088 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818125 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818165 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818234 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818270 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818302 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818359 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818439 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818554 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818587 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818637 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 14 05:34:03 crc 
kubenswrapper[4817]: I0314 05:34:03.818683 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818710 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818740 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818772 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818839 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818867 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818916 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818955 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819041 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819077 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819169 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 05:34:03 crc 
kubenswrapper[4817]: I0314 05:34:03.819229 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819250 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819277 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819305 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819408 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819570 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819607 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819634 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819841 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819956 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820227 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 05:34:03 
crc kubenswrapper[4817]: I0314 05:34:03.820298 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820339 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820575 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820642 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814337 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821057 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821142 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814357 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.814370 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.815620 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817130 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820870 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817128 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817529 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817709 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.817806 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818010 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818157 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818494 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818681 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818963 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.818986 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819073 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819111 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819299 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819270 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819381 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819612 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819783 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.819860 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820124 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820456 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820690 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.820911 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821016 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821474 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821052 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821633 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821592 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821260 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821885 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821942 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.821975 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.822097 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.822180 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.822628 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.822970 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.823613 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.823745 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.824007 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.824015 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.824178 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.824741 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.824861 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.824935 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825082 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825131 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825344 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825248 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825268 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825551 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825581 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825622 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825977 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.825989 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826109 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826016 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826184 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826293 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826385 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826540 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826574 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826858 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826957 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826966 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826991 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.826992 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.827490 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.827604 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.827653 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.827933 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.828039 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.828603 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.828689 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.828778 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.828812 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.828862 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.828927 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.829060 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.829143 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.829283 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.829366 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.829458 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.829606 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.829765 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.830006 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.830152 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.830298 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.830548 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.830564 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.830559 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.830610 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.830709 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.831095 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.831239 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.831346 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.831671 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.831764 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.831870 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.831929 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832029 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832200 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832226 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832264 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832314 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832310 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832411 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832589 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.832720 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:04.332599318 +0000 UTC m=+98.370860064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.832736 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.833463 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.833524 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.833782 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834111 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834118 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834332 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834358 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834389 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834383 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834492 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834510 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834545 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834572 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834597 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834623 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834626 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834654 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834684 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834765 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834793 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834813 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834835 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834934 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834957 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.834984 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835002 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835026 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835046 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835101 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835118 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835134 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 
05:34:03.835153 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835245 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835291 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835304 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835313 4817 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835322 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835331 
4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835340 4817 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835348 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835359 4817 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835361 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835367 4817 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835376 4817 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835384 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835393 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835402 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835411 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835420 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 
05:34:03.835429 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835440 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835449 4817 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835457 4817 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835467 4817 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835478 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835486 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835496 4817 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835505 4817 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835513 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835521 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835531 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835539 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835548 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835557 4817 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835565 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835575 4817 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835583 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835592 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835602 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835611 4817 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835620 4817 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835629 4817 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835638 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835647 4817 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835656 4817 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835664 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835673 4817 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835684 4817 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835692 4817 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835701 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835710 4817 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835719 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835728 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835737 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835746 4817 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on 
node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835755 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835764 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835772 4817 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835780 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835790 4817 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835799 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835808 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 14 
05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835817 4817 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835825 4817 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835834 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835843 4817 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835855 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835867 4817 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835879 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835902 4817 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835915 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835936 4817 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835948 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835956 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835964 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835973 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835983 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835992 4817 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836003 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836012 4817 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836022 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836031 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836040 4817 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836049 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 
05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836058 4817 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836075 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836084 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836093 4817 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836102 4817 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836111 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836120 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836128 4817 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836137 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836150 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836159 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836168 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836176 4817 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836184 4817 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836193 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836201 4817 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836210 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836219 4817 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836227 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836235 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836243 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836251 4817 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" 
DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836262 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836273 4817 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836283 4817 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836296 4817 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836307 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836318 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836328 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 
05:34:03.836336 4817 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835637 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835906 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836359 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836373 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836384 4817 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836393 4817 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836402 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836411 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836421 4817 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 14 
05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836431 4817 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836440 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836449 4817 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836457 4817 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836512 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836523 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836531 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836540 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836550 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836558 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836567 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836579 4817 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836589 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836598 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.835969 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.836012 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.838340 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.838382 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.839327 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.840299 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.840355 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.840636 4817 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.840741 4817 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.840788 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:04.340744074 +0000 UTC m=+98.379004890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.842813 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.843581 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.844062 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.844140 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.844208 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.847506 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.848084 4817 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.848231 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.848338 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:04.348235951 +0000 UTC m=+98.386496697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.849433 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.849465 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.849480 4817 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.849642 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:04.349619091 +0000 UTC m=+98.387879917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.850095 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.850340 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.850767 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.850930 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.851024 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.851075 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.851043 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.851278 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.851332 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.851622 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.851757 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.851937 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.855697 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.855855 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.855878 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.855907 4817 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.855955 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:04.355936994 +0000 UTC m=+98.394197740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.856566 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.857158 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.857338 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.857399 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.857398 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.857431 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.857972 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.858733 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.861096 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.861171 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.861214 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.861258 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.861512 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.862011 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863244 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863385 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863442 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863499 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863608 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863673 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863715 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863774 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.863778 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.866103 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.866131 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.866142 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.866161 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.866174 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:03Z","lastTransitionTime":"2026-03-14T05:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.870794 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.870841 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.870865 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.870927 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.870956 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.871303 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.871308 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.871321 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.871383 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.871641 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.871721 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.872122 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.874130 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.874472 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.884776 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.889188 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.891950 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938040 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938133 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938177 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938193 4817 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938205 4817 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938218 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 
05:34:03.938232 4817 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938244 4817 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938257 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938268 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938279 4817 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938290 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938301 4817 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938313 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938325 4817 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938340 4817 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938352 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938363 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938375 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938385 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938397 4817 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938389 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938408 4817 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938476 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938496 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938511 4817 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938524 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938538 4817 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938551 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938200 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938564 4817 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938615 4817 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938628 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938641 4817 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938654 4817 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938667 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938679 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938691 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938701 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938714 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938726 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938738 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938751 4817 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938761 4817 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938772 4817 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938786 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938796 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938806 4817 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938822 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on 
node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938833 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938845 4817 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938858 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938871 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938888 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938919 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938932 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc 
kubenswrapper[4817]: I0314 05:34:03.938944 4817 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938955 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938967 4817 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938978 4817 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.938989 4817 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.939001 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.939012 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.939025 4817 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.939037 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.939049 4817 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.939060 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.968889 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.969188 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.969254 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.969319 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.969377 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:03Z","lastTransitionTime":"2026-03-14T05:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.984034 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 05:34:03 crc kubenswrapper[4817]: I0314 05:34:03.993011 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.997310 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:03 crc kubenswrapper[4817]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:03 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:03 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:03 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:03 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:03 crc kubenswrapper[4817]: fi Mar 14 05:34:03 crc kubenswrapper[4817]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 14 05:34:03 crc kubenswrapper[4817]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 05:34:03 crc kubenswrapper[4817]: ho_enable="--enable-hybrid-overlay" Mar 14 05:34:03 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 05:34:03 crc kubenswrapper[4817]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 05:34:03 crc kubenswrapper[4817]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 05:34:03 crc kubenswrapper[4817]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 05:34:03 crc kubenswrapper[4817]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 05:34:03 crc kubenswrapper[4817]: --webhook-host=127.0.0.1 \ Mar 14 05:34:03 crc kubenswrapper[4817]: --webhook-port=9743 \ Mar 14 05:34:03 crc kubenswrapper[4817]: ${ho_enable} \ Mar 14 05:34:03 crc kubenswrapper[4817]: --enable-interconnect \ Mar 14 05:34:03 crc kubenswrapper[4817]: --disable-approver \ Mar 14 05:34:03 crc kubenswrapper[4817]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 05:34:03 crc kubenswrapper[4817]: --wait-for-kubernetes-api=200s \ Mar 14 05:34:03 crc kubenswrapper[4817]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 05:34:03 crc kubenswrapper[4817]: --loglevel="${LOGLEVEL}" Mar 14 05:34:03 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:03 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:03 crc kubenswrapper[4817]: E0314 05:34:03.999681 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:03 crc kubenswrapper[4817]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:03 crc 
kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:03 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:03 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:03 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:03 crc kubenswrapper[4817]: fi Mar 14 05:34:03 crc kubenswrapper[4817]: Mar 14 05:34:03 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 05:34:03 crc kubenswrapper[4817]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 05:34:03 crc kubenswrapper[4817]: --disable-webhook \ Mar 14 05:34:03 crc kubenswrapper[4817]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 05:34:03 crc kubenswrapper[4817]: --loglevel="${LOGLEVEL}" Mar 14 05:34:03 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:03 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.001469 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.001848 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 05:34:04 crc kubenswrapper[4817]: W0314 05:34:04.005854 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-3a36b77f303f085e3224d6e4aad3927ca545e1efad0121d2ca6c644c7f52ee08 WatchSource:0}: Error finding container 3a36b77f303f085e3224d6e4aad3927ca545e1efad0121d2ca6c644c7f52ee08: Status 404 returned error can't find the container with id 3a36b77f303f085e3224d6e4aad3927ca545e1efad0121d2ca6c644c7f52ee08 Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.009009 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.010265 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 05:34:04 crc kubenswrapper[4817]: W0314 05:34:04.011498 4817 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0d2f3abc8ef7848df03f0092c3a928f3b229464563a11625945f559ec5d5488e WatchSource:0}: Error finding container 0d2f3abc8ef7848df03f0092c3a928f3b229464563a11625945f559ec5d5488e: Status 404 returned error can't find the container with id 0d2f3abc8ef7848df03f0092c3a928f3b229464563a11625945f559ec5d5488e Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.013389 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:04 crc kubenswrapper[4817]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:04 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:04 crc kubenswrapper[4817]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 05:34:04 crc kubenswrapper[4817]: source /etc/kubernetes/apiserver-url.env Mar 14 05:34:04 crc kubenswrapper[4817]: else Mar 14 05:34:04 crc kubenswrapper[4817]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 05:34:04 crc kubenswrapper[4817]: exit 1 Mar 14 05:34:04 crc kubenswrapper[4817]: fi Mar 14 05:34:04 crc kubenswrapper[4817]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 05:34:04 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:04 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.014885 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.071938 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 
05:34:04.071976 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.071984 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.071998 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.072007 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.173144 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3a36b77f303f085e3224d6e4aad3927ca545e1efad0121d2ca6c644c7f52ee08"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.174122 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.174144 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b5a42155b5556950b4620b5177bd7f64a0ba957884831f09e365dc528cf2fc33"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.174177 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc 
kubenswrapper[4817]: I0314 05:34:04.174195 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.174215 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.174230 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.174915 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.175288 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:04 crc kubenswrapper[4817]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:04 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:04 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:04 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:04 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:04 crc 
kubenswrapper[4817]: fi Mar 14 05:34:04 crc kubenswrapper[4817]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 14 05:34:04 crc kubenswrapper[4817]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 05:34:04 crc kubenswrapper[4817]: ho_enable="--enable-hybrid-overlay" Mar 14 05:34:04 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 05:34:04 crc kubenswrapper[4817]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 05:34:04 crc kubenswrapper[4817]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 05:34:04 crc kubenswrapper[4817]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 05:34:04 crc kubenswrapper[4817]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 05:34:04 crc kubenswrapper[4817]: --webhook-host=127.0.0.1 \ Mar 14 05:34:04 crc kubenswrapper[4817]: --webhook-port=9743 \ Mar 14 05:34:04 crc kubenswrapper[4817]: ${ho_enable} \ Mar 14 05:34:04 crc kubenswrapper[4817]: --enable-interconnect \ Mar 14 05:34:04 crc kubenswrapper[4817]: --disable-approver \ Mar 14 05:34:04 crc kubenswrapper[4817]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 05:34:04 crc kubenswrapper[4817]: --wait-for-kubernetes-api=200s \ Mar 14 05:34:04 crc kubenswrapper[4817]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 05:34:04 crc kubenswrapper[4817]: --loglevel="${LOGLEVEL}" Mar 14 05:34:04 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:04 crc 
kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.175414 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0d2f3abc8ef7848df03f0092c3a928f3b229464563a11625945f559ec5d5488e"} Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.176612 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.177620 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:04 crc kubenswrapper[4817]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:04 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:04 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:04 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:04 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:04 crc kubenswrapper[4817]: fi Mar 14 05:34:04 crc kubenswrapper[4817]: Mar 14 05:34:04 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 05:34:04 crc kubenswrapper[4817]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 05:34:04 crc kubenswrapper[4817]: --disable-webhook \ Mar 14 05:34:04 crc kubenswrapper[4817]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 05:34:04 crc kubenswrapper[4817]: --loglevel="${LOGLEVEL}" Mar 14 05:34:04 crc 
kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:04 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.177740 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:04 crc kubenswrapper[4817]: container 
&Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:04 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:04 crc kubenswrapper[4817]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 05:34:04 crc kubenswrapper[4817]: source /etc/kubernetes/apiserver-url.env Mar 14 05:34:04 crc kubenswrapper[4817]: else Mar 14 05:34:04 crc kubenswrapper[4817]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 05:34:04 crc kubenswrapper[4817]: exit 1 Mar 14 05:34:04 crc kubenswrapper[4817]: fi Mar 14 05:34:04 crc kubenswrapper[4817]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 05:34:04 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c
64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_
CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:04 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.178801 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.178845 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.186284 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.197153 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.205225 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.215975 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.224099 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.235221 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.245032 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.257508 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.269197 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.276919 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc 
kubenswrapper[4817]: I0314 05:34:04.276960 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.276969 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.276985 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.276996 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.281109 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.290636 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.300683 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.342309 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.342496 4817 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.343126 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:05.343099024 +0000 UTC m=+99.381359770 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.343209 4817 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.343312 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:05.34328869 +0000 UTC m=+99.381549446 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.380620 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.380665 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.380673 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.380689 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.380699 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.443190 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.443258 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.443299 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443394 4817 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443425 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443464 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443482 4817 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443508 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:05.443485092 +0000 UTC m=+99.481745838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443423 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443552 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:05.443526683 +0000 UTC m=+99.481787469 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443557 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443569 4817 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.443599 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:05.443589705 +0000 UTC m=+99.481850451 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.484106 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.484240 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.484267 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.484299 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.484323 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.587161 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.587212 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.587222 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.587237 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.587251 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.689541 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.689600 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.689609 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.689623 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.689633 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.735092 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.735858 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.737588 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.738504 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.739832 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.740632 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.741464 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.742850 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.743769 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.745148 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.745843 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.747365 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.748065 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.748823 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.750153 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.750879 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.752277 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.752857 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.753888 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.755285 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.755960 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.757420 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.758322 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.759799 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.760544 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.761570 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.763141 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.763786 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.765432 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.766113 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.767481 4817 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.767642 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.769979 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.771291 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.771955 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.774028 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.774880 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.776154 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.777116 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.778653 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.779354 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.781299 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.782385 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.783977 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.786070 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.787337 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.789484 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.791201 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.792450 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.792497 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.792810 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.792837 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.793076 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.793136 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.794012 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.796281 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.797456 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.798810 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.800633 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.868923 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.868987 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.869006 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.869032 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc 
kubenswrapper[4817]: I0314 05:34:04.869050 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.883968 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.888061 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.888128 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.888141 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.888160 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.888201 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.906931 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.910729 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.910756 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.910768 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.910783 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.910794 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.923581 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.927267 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.927294 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.927305 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.927319 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.927330 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.941648 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.945422 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.945451 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.945461 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.945478 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.945490 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.959759 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:04 crc kubenswrapper[4817]: E0314 05:34:04.960182 4817 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.961875 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.961964 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.961984 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.962009 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:04 crc kubenswrapper[4817]: I0314 05:34:04.962026 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:04Z","lastTransitionTime":"2026-03-14T05:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.063559 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.063604 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.063614 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.063632 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.063644 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.166077 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.166127 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.166139 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.166155 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.166165 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.268024 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.268072 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.268083 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.268098 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.268109 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.350640 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.350724 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.350819 4817 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.350832 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:07.350790483 +0000 UTC m=+101.389051229 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.350873 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:07.350865875 +0000 UTC m=+101.389126621 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.369627 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.369658 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.369667 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.369681 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.369691 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.451400 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.451517 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.451558 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451639 4817 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451653 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451672 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451687 4817 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451702 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:07.451682055 +0000 UTC m=+101.489942801 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451720 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:07.451710136 +0000 UTC m=+101.489970882 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451783 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451853 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451870 4817 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.451964 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:07.451939123 +0000 UTC m=+101.490199929 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.471743 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.471813 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.471824 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.471841 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.471851 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.574749 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.574807 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.574816 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.574834 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.574846 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.676832 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.676876 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.676908 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.676928 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.676943 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.731556 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.731656 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.731767 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.731976 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.732145 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 05:34:05 crc kubenswrapper[4817]: E0314 05:34:05.732316 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.779298 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.779348 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.779361 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.779378 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.779388 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.881968 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.882036 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.882047 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.882066 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.882079 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.985624 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.985716 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.986163 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.986236 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:05 crc kubenswrapper[4817]: I0314 05:34:05.986258 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:05Z","lastTransitionTime":"2026-03-14T05:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.090928 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.091003 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.091021 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.091052 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.091072 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.193923 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.194009 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.194033 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.194065 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.194088 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.297981 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.298064 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.298088 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.298118 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.298140 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.400208 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.400327 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.400350 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.400418 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.400451 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.502278 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.502322 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.502333 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.502351 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.502362 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.608437 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.608476 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.608484 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.608497 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.608519 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.711390 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.711553 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.711573 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.711600 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.711616 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.745551 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.749259 4817 scope.go:117] "RemoveContainer" containerID="c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.749643 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.760386 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.774658 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.788885 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.802942 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.814323 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.814843 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:06 crc 
kubenswrapper[4817]: I0314 05:34:06.814883 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.814896 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.814926 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.814935 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.917254 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.917294 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.917304 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.917319 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:06 crc kubenswrapper[4817]: I0314 05:34:06.917329 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:06Z","lastTransitionTime":"2026-03-14T05:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.019966 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.020020 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.020034 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.020054 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.020091 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.122430 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.122467 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.122475 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.122509 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.122521 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.184023 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.185503 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.185846 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.197844 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.215010 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.225725 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.225774 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.225786 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.225808 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.225822 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.230329 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.240561 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.253321 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.266462 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.279222 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.327828 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.327891 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.327922 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.327947 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.327961 4817 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.371624 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.371803 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:11.371765902 +0000 UTC m=+105.410026688 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.371949 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.372166 4817 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.372298 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:11.372278197 +0000 UTC m=+105.410538933 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.430680 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.431008 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.431108 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.431183 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.431248 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.472828 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.473029 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.473058 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.472997 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473134 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473146 4817 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473203 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:11.47318578 +0000 UTC m=+105.511446526 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473116 4817 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473469 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:11.473438597 +0000 UTC m=+105.511699343 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473341 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473627 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473707 4817 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.473848 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:11.473838819 +0000 UTC m=+105.512099565 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.534235 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.534289 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.534300 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.534317 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.534328 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.636215 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.636449 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.636513 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.636574 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.636634 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.731623 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.731783 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.731636 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.732301 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.732415 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:07 crc kubenswrapper[4817]: E0314 05:34:07.732306 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.738834 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.739051 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.739142 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.739216 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.739291 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.842878 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.842943 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.842955 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.842994 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.843007 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.865966 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-v2gnk"] Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.866338 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-v2gnk" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.868965 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.869516 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.870386 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.900042 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.914467 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.926896 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.936415 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.945239 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.945281 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.945291 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.945312 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 
05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.945324 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:07Z","lastTransitionTime":"2026-03-14T05:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.949968 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.960681 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.971465 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.978342 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7626bd0c-9420-4e61-98b0-00e4c9eb21f2-hosts-file\") pod \"node-resolver-v2gnk\" (UID: \"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\") " pod="openshift-dns/node-resolver-v2gnk" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.978374 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlks\" (UniqueName: \"kubernetes.io/projected/7626bd0c-9420-4e61-98b0-00e4c9eb21f2-kube-api-access-lvlks\") pod \"node-resolver-v2gnk\" (UID: \"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\") " pod="openshift-dns/node-resolver-v2gnk" Mar 14 05:34:07 crc kubenswrapper[4817]: I0314 05:34:07.980697 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.047973 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.048008 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.048016 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.048031 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.048040 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.079435 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7626bd0c-9420-4e61-98b0-00e4c9eb21f2-hosts-file\") pod \"node-resolver-v2gnk\" (UID: \"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\") " pod="openshift-dns/node-resolver-v2gnk" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.079573 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7626bd0c-9420-4e61-98b0-00e4c9eb21f2-hosts-file\") pod \"node-resolver-v2gnk\" (UID: \"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\") " pod="openshift-dns/node-resolver-v2gnk" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.079688 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlks\" (UniqueName: \"kubernetes.io/projected/7626bd0c-9420-4e61-98b0-00e4c9eb21f2-kube-api-access-lvlks\") pod \"node-resolver-v2gnk\" (UID: \"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\") " pod="openshift-dns/node-resolver-v2gnk" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.096770 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlks\" (UniqueName: \"kubernetes.io/projected/7626bd0c-9420-4e61-98b0-00e4c9eb21f2-kube-api-access-lvlks\") pod \"node-resolver-v2gnk\" (UID: \"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\") " pod="openshift-dns/node-resolver-v2gnk" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.151163 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.151206 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.151224 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.151244 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.151257 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.185311 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-v2gnk" Mar 14 05:34:08 crc kubenswrapper[4817]: W0314 05:34:08.195960 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7626bd0c_9420_4e61_98b0_00e4c9eb21f2.slice/crio-45510b373cd06591d0db0bb08b60f43c77e83e888a9863298183401034a487a0 WatchSource:0}: Error finding container 45510b373cd06591d0db0bb08b60f43c77e83e888a9863298183401034a487a0: Status 404 returned error can't find the container with id 45510b373cd06591d0db0bb08b60f43c77e83e888a9863298183401034a487a0 Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.198555 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:08 crc kubenswrapper[4817]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:08 crc kubenswrapper[4817]: set -uo 
pipefail Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 14 05:34:08 crc kubenswrapper[4817]: HOSTS_FILE="/etc/hosts" Mar 14 05:34:08 crc kubenswrapper[4817]: TEMP_FILE="/etc/hosts.tmp" Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: # Make a temporary file with the old hosts file's attributes. Mar 14 05:34:08 crc kubenswrapper[4817]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 14 05:34:08 crc kubenswrapper[4817]: echo "Failed to preserve hosts file. Exiting." Mar 14 05:34:08 crc kubenswrapper[4817]: exit 1 Mar 14 05:34:08 crc kubenswrapper[4817]: fi Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: while true; do Mar 14 05:34:08 crc kubenswrapper[4817]: declare -A svc_ips Mar 14 05:34:08 crc kubenswrapper[4817]: for svc in "${services[@]}"; do Mar 14 05:34:08 crc kubenswrapper[4817]: # Fetch service IP from cluster dns if present. We make several tries Mar 14 05:34:08 crc kubenswrapper[4817]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 14 05:34:08 crc kubenswrapper[4817]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 14 05:34:08 crc kubenswrapper[4817]: # support UDP loadbalancers and require reaching DNS through TCP. 
Mar 14 05:34:08 crc kubenswrapper[4817]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:08 crc kubenswrapper[4817]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:08 crc kubenswrapper[4817]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:08 crc kubenswrapper[4817]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 14 05:34:08 crc kubenswrapper[4817]: for i in ${!cmds[*]} Mar 14 05:34:08 crc kubenswrapper[4817]: do Mar 14 05:34:08 crc kubenswrapper[4817]: ips=($(eval "${cmds[i]}")) Mar 14 05:34:08 crc kubenswrapper[4817]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 14 05:34:08 crc kubenswrapper[4817]: svc_ips["${svc}"]="${ips[@]}" Mar 14 05:34:08 crc kubenswrapper[4817]: break Mar 14 05:34:08 crc kubenswrapper[4817]: fi Mar 14 05:34:08 crc kubenswrapper[4817]: done Mar 14 05:34:08 crc kubenswrapper[4817]: done Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: # Update /etc/hosts only if we get valid service IPs Mar 14 05:34:08 crc kubenswrapper[4817]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 14 05:34:08 crc kubenswrapper[4817]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 14 05:34:08 crc kubenswrapper[4817]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 14 05:34:08 crc kubenswrapper[4817]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 14 05:34:08 crc kubenswrapper[4817]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 14 05:34:08 crc kubenswrapper[4817]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 14 05:34:08 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:08 crc kubenswrapper[4817]: continue Mar 14 05:34:08 crc kubenswrapper[4817]: fi Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: # Append resolver entries for services Mar 14 05:34:08 crc kubenswrapper[4817]: rc=0 Mar 14 05:34:08 crc kubenswrapper[4817]: for svc in "${!svc_ips[@]}"; do Mar 14 05:34:08 crc kubenswrapper[4817]: for ip in ${svc_ips[${svc}]}; do Mar 14 05:34:08 crc kubenswrapper[4817]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 14 05:34:08 crc kubenswrapper[4817]: done Mar 14 05:34:08 crc kubenswrapper[4817]: done Mar 14 05:34:08 crc kubenswrapper[4817]: if [[ $rc -ne 0 ]]; then Mar 14 05:34:08 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:08 crc kubenswrapper[4817]: continue Mar 14 05:34:08 crc kubenswrapper[4817]: fi Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: Mar 14 05:34:08 crc kubenswrapper[4817]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 14 05:34:08 crc kubenswrapper[4817]: # Replace /etc/hosts with our modified version if needed Mar 14 05:34:08 crc kubenswrapper[4817]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 14 05:34:08 crc kubenswrapper[4817]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 14 05:34:08 crc kubenswrapper[4817]: fi Mar 14 05:34:08 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:08 crc kubenswrapper[4817]: unset svc_ips Mar 14 05:34:08 crc kubenswrapper[4817]: done Mar 14 05:34:08 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvlks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-v2gnk_openshift-dns(7626bd0c-9420-4e61-98b0-00e4c9eb21f2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:08 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.200128 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-v2gnk" 
podUID="7626bd0c-9420-4e61-98b0-00e4c9eb21f2" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.238257 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-f8hwl"] Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.238618 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jlnmq"] Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.238811 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.239700 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-wdf7p"] Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.240008 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.240148 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.242283 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.242872 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.243533 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.243552 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.244574 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.244625 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.245353 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.245376 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.245403 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.245647 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 
05:34:08.245826 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.245837 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.253512 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.253550 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.253560 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.253577 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.253589 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.263051 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.275824 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.288105 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.302207 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.318471 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.333934 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.350710 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.356250 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.356324 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.356345 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.356375 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.356397 4817 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.362316 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.373872 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383006 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/676c3e1e-370b-4a49-80c6-27422d2d1d56-proxy-tls\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383064 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-k8s-cni-cncf-io\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383134 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-kubelet\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383170 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-os-release\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383207 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-etc-kubernetes\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383258 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-daemon-config\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383296 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgs6\" (UniqueName: 
\"kubernetes.io/projected/44e2523e-6f4b-475e-b733-a45e3744f774-kube-api-access-8zgs6\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383334 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/44e2523e-6f4b-475e-b733-a45e3744f774-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383370 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-socket-dir-parent\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383573 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwj2\" (UniqueName: \"kubernetes.io/projected/217c6f57-e799-4243-86ea-5b76c95c95ec-kube-api-access-bvwj2\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383667 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-system-cni-dir\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383772 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/676c3e1e-370b-4a49-80c6-27422d2d1d56-rootfs\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383823 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/676c3e1e-370b-4a49-80c6-27422d2d1d56-mcd-auth-proxy-config\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383862 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-cni-bin\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383897 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-cnibin\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.383968 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/217c6f57-e799-4243-86ea-5b76c95c95ec-cni-binary-copy\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 
crc kubenswrapper[4817]: I0314 05:34:08.384074 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-conf-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384139 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-multus-certs\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384164 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-os-release\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384214 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-hostroot\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384237 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtmnv\" (UniqueName: \"kubernetes.io/projected/676c3e1e-370b-4a49-80c6-27422d2d1d56-kube-api-access-vtmnv\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " 
pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384253 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-cni-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384268 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-cnibin\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384284 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-netns\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384309 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-system-cni-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384334 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/44e2523e-6f4b-475e-b733-a45e3744f774-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " 
pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384509 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-cni-multus\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.384573 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.387537 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.401195 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.413210 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.428499 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.446055 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.459765 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.459809 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.459823 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.459841 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.459852 4817 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.460934 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.470926 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485225 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-cni-multus\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485288 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485318 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/676c3e1e-370b-4a49-80c6-27422d2d1d56-proxy-tls\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485346 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-k8s-cni-cncf-io\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485374 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-cni-multus\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485406 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-kubelet\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485430 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-os-release\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485462 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-daemon-config\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485491 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-etc-kubernetes\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485520 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgs6\" (UniqueName: 
\"kubernetes.io/projected/44e2523e-6f4b-475e-b733-a45e3744f774-kube-api-access-8zgs6\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485554 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-socket-dir-parent\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485608 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-kubelet\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485642 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-etc-kubernetes\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485636 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-k8s-cni-cncf-io\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485739 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/44e2523e-6f4b-475e-b733-a45e3744f774-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485778 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-os-release\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485818 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-socket-dir-parent\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486013 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/676c3e1e-370b-4a49-80c6-27422d2d1d56-rootfs\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486500 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/44e2523e-6f4b-475e-b733-a45e3744f774-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.485764 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/676c3e1e-370b-4a49-80c6-27422d2d1d56-rootfs\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") 
" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486561 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/676c3e1e-370b-4a49-80c6-27422d2d1d56-mcd-auth-proxy-config\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486580 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-cni-bin\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486595 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwj2\" (UniqueName: \"kubernetes.io/projected/217c6f57-e799-4243-86ea-5b76c95c95ec-kube-api-access-bvwj2\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486614 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-system-cni-dir\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486629 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/217c6f57-e799-4243-86ea-5b76c95c95ec-cni-binary-copy\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " 
pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486645 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-conf-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486660 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-cnibin\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486683 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-hostroot\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486698 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-multus-certs\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486712 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-os-release\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486728 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtmnv\" (UniqueName: \"kubernetes.io/projected/676c3e1e-370b-4a49-80c6-27422d2d1d56-kube-api-access-vtmnv\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486745 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-cni-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486763 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-cnibin\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486817 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-netns\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486832 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-system-cni-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486847 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/44e2523e-6f4b-475e-b733-a45e3744f774-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.486944 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487052 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-hostroot\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487099 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-daemon-config\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487151 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-system-cni-dir\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487190 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-var-lib-cni-bin\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487210 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-netns\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487293 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-cnibin\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487362 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-system-cni-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487451 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-os-release\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487473 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-conf-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" 
Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487510 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-host-run-multus-certs\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487547 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/217c6f57-e799-4243-86ea-5b76c95c95ec-multus-cni-dir\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.487568 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/44e2523e-6f4b-475e-b733-a45e3744f774-cnibin\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.488025 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/217c6f57-e799-4243-86ea-5b76c95c95ec-cni-binary-copy\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.488800 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.489209 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/676c3e1e-370b-4a49-80c6-27422d2d1d56-mcd-auth-proxy-config\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.489364 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/676c3e1e-370b-4a49-80c6-27422d2d1d56-proxy-tls\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 
05:34:08.490177 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/44e2523e-6f4b-475e-b733-a45e3744f774-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.510607 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtmnv\" (UniqueName: \"kubernetes.io/projected/676c3e1e-370b-4a49-80c6-27422d2d1d56-kube-api-access-vtmnv\") pod \"machine-config-daemon-f8hwl\" (UID: \"676c3e1e-370b-4a49-80c6-27422d2d1d56\") " pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.513225 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.515696 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwj2\" (UniqueName: \"kubernetes.io/projected/217c6f57-e799-4243-86ea-5b76c95c95ec-kube-api-access-bvwj2\") pod \"multus-wdf7p\" (UID: \"217c6f57-e799-4243-86ea-5b76c95c95ec\") " pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.516317 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgs6\" (UniqueName: \"kubernetes.io/projected/44e2523e-6f4b-475e-b733-a45e3744f774-kube-api-access-8zgs6\") pod 
\"multus-additional-cni-plugins-jlnmq\" (UID: \"44e2523e-6f4b-475e-b733-a45e3744f774\") " pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.530623 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.550216 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.563258 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.563330 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.563343 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.563362 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.563375 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.564589 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.577033 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.579197 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-wdf7p" Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.580155 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.581342 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.590019 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" Mar 14 05:34:08 crc kubenswrapper[4817]: W0314 05:34:08.596515 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217c6f57_e799_4243_86ea_5b76c95c95ec.slice/crio-d9f13fcbc8541f1fd20b405807febbc880d9f4045f2935e3fcc9f327784f67f1 WatchSource:0}: Error finding container d9f13fcbc8541f1fd20b405807febbc880d9f4045f2935e3fcc9f327784f67f1: Status 404 returned error can't find the container with id d9f13fcbc8541f1fd20b405807febbc880d9f4045f2935e3fcc9f327784f67f1 Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.600127 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:08 crc kubenswrapper[4817]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 14 05:34:08 crc kubenswrapper[4817]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 14 05:34:08 crc kubenswrapper[4817]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvwj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-wdf7p_openshift-multus(217c6f57-e799-4243-86ea-5b76c95c95ec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:08 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.601844 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-wdf7p" podUID="217c6f57-e799-4243-86ea-5b76c95c95ec" Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.607920 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,
RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zgs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-jlnmq_openshift-multus(44e2523e-6f4b-475e-b733-a45e3744f774): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.609208 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" podUID="44e2523e-6f4b-475e-b733-a45e3744f774" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.615177 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tntn6"] Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.616187 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.619077 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.619103 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.619135 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.619340 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.619689 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.619881 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.620472 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.627275 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.637829 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.649867 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.665645 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.667090 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.667135 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.667150 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.667172 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.667186 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.677056 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.686447 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.698828 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.710988 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.722278 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.756421 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.769769 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.770605 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.770650 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.770663 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.770683 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.770695 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.781567 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790378 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-kubelet\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790420 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-env-overrides\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790457 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-openvswitch\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790482 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-bin\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790506 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-ovn\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790527 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-slash\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790600 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-systemd-units\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790637 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-config\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790671 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-var-lib-openvswitch\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790707 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-script-lib\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790755 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-log-socket\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.790784 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovn-node-metrics-cert\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc 
kubenswrapper[4817]: I0314 05:34:08.790813 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7fm\" (UniqueName: \"kubernetes.io/projected/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-kube-api-access-xh7fm\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.791028 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-ovn-kubernetes\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.791105 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-netns\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.791135 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-systemd\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.791158 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tntn6\" (UID: 
\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.791215 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-etc-openvswitch\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.791249 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-node-log\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.791316 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-netd\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.873941 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.874029 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.874056 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.874091 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc 
kubenswrapper[4817]: I0314 05:34:08.874117 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892497 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovn-node-metrics-cert\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892563 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7fm\" (UniqueName: \"kubernetes.io/projected/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-kube-api-access-xh7fm\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892620 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-ovn-kubernetes\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892653 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-netns\") pod \"ovnkube-node-tntn6\" (UID: 
\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892684 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-systemd\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892719 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892764 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-etc-openvswitch\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892799 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-node-log\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892831 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-netd\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892878 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-kubelet\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.892912 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-env-overrides\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893026 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-openvswitch\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893055 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-bin\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893086 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-ovn\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 
05:34:08.893118 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-slash\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893202 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-systemd-units\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893236 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-config\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893231 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-etc-openvswitch\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893283 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-var-lib-openvswitch\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893346 4817 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-script-lib\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893380 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-log-socket\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893380 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-node-log\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893483 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-log-socket\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893497 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-netd\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893552 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-openvswitch\") pod 
\"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893579 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-kubelet\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893602 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-bin\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893638 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-ovn-kubernetes\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893654 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-ovn\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893231 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-netns\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" 
Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893664 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893706 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-slash\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893692 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-systemd\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893756 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-var-lib-openvswitch\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.893816 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-systemd-units\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.894044 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-env-overrides\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.894311 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-config\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.894940 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-script-lib\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.897439 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovn-node-metrics-cert\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.914153 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7fm\" (UniqueName: \"kubernetes.io/projected/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-kube-api-access-xh7fm\") pod \"ovnkube-node-tntn6\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.937653 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:08 crc kubenswrapper[4817]: W0314 05:34:08.952832 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd2a2f95_23c1_4605_b9b2_178f7ef2a7aa.slice/crio-9596a2376847a524fe549f6ffcfc0cf1fdebf07c44636e372d786b2dcce684f1 WatchSource:0}: Error finding container 9596a2376847a524fe549f6ffcfc0cf1fdebf07c44636e372d786b2dcce684f1: Status 404 returned error can't find the container with id 9596a2376847a524fe549f6ffcfc0cf1fdebf07c44636e372d786b2dcce684f1 Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.956016 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:08 crc kubenswrapper[4817]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 14 05:34:08 crc kubenswrapper[4817]: apiVersion: v1 Mar 14 05:34:08 crc kubenswrapper[4817]: clusters: Mar 14 05:34:08 crc kubenswrapper[4817]: - cluster: Mar 14 05:34:08 crc kubenswrapper[4817]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 14 05:34:08 crc kubenswrapper[4817]: server: https://api-int.crc.testing:6443 Mar 14 05:34:08 crc kubenswrapper[4817]: name: default-cluster Mar 14 05:34:08 crc kubenswrapper[4817]: contexts: Mar 14 05:34:08 crc kubenswrapper[4817]: - context: Mar 14 05:34:08 crc kubenswrapper[4817]: cluster: default-cluster Mar 14 05:34:08 crc kubenswrapper[4817]: namespace: default Mar 14 05:34:08 crc kubenswrapper[4817]: user: default-auth Mar 14 05:34:08 crc kubenswrapper[4817]: name: default-context Mar 14 05:34:08 crc kubenswrapper[4817]: current-context: default-context Mar 14 05:34:08 crc kubenswrapper[4817]: kind: Config Mar 14 05:34:08 crc kubenswrapper[4817]: preferences: {} Mar 14 05:34:08 crc kubenswrapper[4817]: 
users: Mar 14 05:34:08 crc kubenswrapper[4817]: - name: default-auth Mar 14 05:34:08 crc kubenswrapper[4817]: user: Mar 14 05:34:08 crc kubenswrapper[4817]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 14 05:34:08 crc kubenswrapper[4817]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 14 05:34:08 crc kubenswrapper[4817]: EOF Mar 14 05:34:08 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xh7fm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-tntn6_openshift-ovn-kubernetes(dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:08 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:08 crc kubenswrapper[4817]: E0314 05:34:08.957300 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 
05:34:08.977497 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.977600 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.977617 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.977643 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:08 crc kubenswrapper[4817]: I0314 05:34:08.977659 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:08Z","lastTransitionTime":"2026-03-14T05:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.080729 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.080807 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.080834 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.080864 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.080884 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.184265 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.184330 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.184344 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.184368 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.184383 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.191788 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"9596a2376847a524fe549f6ffcfc0cf1fdebf07c44636e372d786b2dcce684f1"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.193370 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerStarted","Data":"fa75abfdc1077d39de56503bea44a4c17a2ba686d6d428cdbe9b6ea13e26d6fb"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.194569 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdf7p" event={"ID":"217c6f57-e799-4243-86ea-5b76c95c95ec","Type":"ContainerStarted","Data":"d9f13fcbc8541f1fd20b405807febbc880d9f4045f2935e3fcc9f327784f67f1"} Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.195419 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zgs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-jlnmq_openshift-multus(44e2523e-6f4b-475e-b733-a45e3744f774): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.196222 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"5eb05a46ed2b48295d13388344d3321921746520878a6dca60a05f2db77a2bf8"} Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.196299 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:09 crc kubenswrapper[4817]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 14 05:34:09 crc kubenswrapper[4817]: apiVersion: v1 Mar 14 05:34:09 crc kubenswrapper[4817]: clusters: Mar 14 05:34:09 crc kubenswrapper[4817]: - cluster: Mar 14 05:34:09 crc kubenswrapper[4817]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 14 05:34:09 crc kubenswrapper[4817]: server: https://api-int.crc.testing:6443 Mar 14 05:34:09 crc kubenswrapper[4817]: name: default-cluster Mar 14 05:34:09 crc kubenswrapper[4817]: contexts: Mar 14 05:34:09 crc kubenswrapper[4817]: - context: Mar 14 05:34:09 crc kubenswrapper[4817]: cluster: default-cluster Mar 14 05:34:09 crc kubenswrapper[4817]: namespace: default Mar 14 05:34:09 crc kubenswrapper[4817]: user: default-auth Mar 14 05:34:09 crc kubenswrapper[4817]: name: default-context Mar 14 05:34:09 crc kubenswrapper[4817]: current-context: default-context Mar 14 05:34:09 crc kubenswrapper[4817]: kind: Config Mar 14 05:34:09 crc kubenswrapper[4817]: preferences: {} Mar 14 05:34:09 crc kubenswrapper[4817]: users: Mar 14 05:34:09 crc kubenswrapper[4817]: - name: default-auth Mar 14 05:34:09 crc kubenswrapper[4817]: user: Mar 14 05:34:09 crc kubenswrapper[4817]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 14 05:34:09 crc kubenswrapper[4817]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 14 05:34:09 crc kubenswrapper[4817]: EOF Mar 14 
05:34:09 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xh7fm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-tntn6_openshift-ovn-kubernetes(dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:09 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.198870 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" podUID="44e2523e-6f4b-475e-b733-a45e3744f774" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.199086 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:09 crc kubenswrapper[4817]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 14 05:34:09 crc kubenswrapper[4817]: 
/entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 14 05:34:09 crc kubenswrapper[4817]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvwj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-wdf7p_openshift-multus(217c6f57-e799-4243-86ea-5b76c95c95ec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:09 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.200602 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-wdf7p" podUID="217c6f57-e799-4243-86ea-5b76c95c95ec" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.201825 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.203626 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-v2gnk" event={"ID":"7626bd0c-9420-4e61-98b0-00e4c9eb21f2","Type":"ContainerStarted","Data":"45510b373cd06591d0db0bb08b60f43c77e83e888a9863298183401034a487a0"} Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.199145 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.204683 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.206247 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.206055 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:09 crc kubenswrapper[4817]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:09 crc kubenswrapper[4817]: set -uo pipefail Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 14 05:34:09 crc kubenswrapper[4817]: HOSTS_FILE="/etc/hosts" Mar 14 05:34:09 crc kubenswrapper[4817]: TEMP_FILE="/etc/hosts.tmp" Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: # Make a temporary file with the old hosts file's attributes. Mar 14 05:34:09 crc kubenswrapper[4817]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 14 05:34:09 crc kubenswrapper[4817]: echo "Failed to preserve hosts file. Exiting." Mar 14 05:34:09 crc kubenswrapper[4817]: exit 1 Mar 14 05:34:09 crc kubenswrapper[4817]: fi Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: while true; do Mar 14 05:34:09 crc kubenswrapper[4817]: declare -A svc_ips Mar 14 05:34:09 crc kubenswrapper[4817]: for svc in "${services[@]}"; do Mar 14 05:34:09 crc kubenswrapper[4817]: # Fetch service IP from cluster dns if present. We make several tries Mar 14 05:34:09 crc kubenswrapper[4817]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Mar 14 05:34:09 crc kubenswrapper[4817]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 14 05:34:09 crc kubenswrapper[4817]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 14 05:34:09 crc kubenswrapper[4817]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:09 crc kubenswrapper[4817]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:09 crc kubenswrapper[4817]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:09 crc kubenswrapper[4817]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 14 05:34:09 crc kubenswrapper[4817]: for i in ${!cmds[*]} Mar 14 05:34:09 crc kubenswrapper[4817]: do Mar 14 05:34:09 crc kubenswrapper[4817]: ips=($(eval "${cmds[i]}")) Mar 14 05:34:09 crc kubenswrapper[4817]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 14 05:34:09 crc kubenswrapper[4817]: svc_ips["${svc}"]="${ips[@]}" Mar 14 05:34:09 crc kubenswrapper[4817]: break Mar 14 05:34:09 crc kubenswrapper[4817]: fi Mar 14 05:34:09 crc kubenswrapper[4817]: done Mar 14 05:34:09 crc kubenswrapper[4817]: done Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: # Update /etc/hosts only if we get valid service IPs Mar 14 05:34:09 crc kubenswrapper[4817]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 14 05:34:09 crc kubenswrapper[4817]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 14 05:34:09 crc kubenswrapper[4817]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 14 05:34:09 crc kubenswrapper[4817]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 14 05:34:09 crc kubenswrapper[4817]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 14 05:34:09 crc kubenswrapper[4817]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 14 05:34:09 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:09 crc kubenswrapper[4817]: continue Mar 14 05:34:09 crc kubenswrapper[4817]: fi Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: # Append resolver entries for services Mar 14 05:34:09 crc kubenswrapper[4817]: rc=0 Mar 14 05:34:09 crc kubenswrapper[4817]: for svc in "${!svc_ips[@]}"; do Mar 14 05:34:09 crc kubenswrapper[4817]: for ip in ${svc_ips[${svc}]}; do Mar 14 05:34:09 crc kubenswrapper[4817]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 14 05:34:09 crc kubenswrapper[4817]: done Mar 14 05:34:09 crc kubenswrapper[4817]: done Mar 14 05:34:09 crc kubenswrapper[4817]: if [[ $rc -ne 0 ]]; then Mar 14 05:34:09 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:09 crc kubenswrapper[4817]: continue Mar 14 05:34:09 crc kubenswrapper[4817]: fi Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: Mar 14 05:34:09 crc kubenswrapper[4817]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 14 05:34:09 crc kubenswrapper[4817]: # Replace /etc/hosts with our modified version if needed Mar 14 05:34:09 crc kubenswrapper[4817]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 14 05:34:09 crc kubenswrapper[4817]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 14 05:34:09 crc kubenswrapper[4817]: fi Mar 14 05:34:09 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:09 crc kubenswrapper[4817]: unset svc_ips Mar 14 05:34:09 crc kubenswrapper[4817]: done Mar 14 05:34:09 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvlks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-v2gnk_openshift-dns(7626bd0c-9420-4e61-98b0-00e4c9eb21f2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:09 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.209228 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-v2gnk" 
podUID="7626bd0c-9420-4e61-98b0-00e4c9eb21f2" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.226330 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.242133 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.255764 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.265779 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.279193 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.287286 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.287337 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 
05:34:09.287356 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.287386 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.287405 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.298815 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.314111 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.327344 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.342260 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.353987 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.369459 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.380760 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.390727 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.390789 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.390816 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.390852 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 
05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.390877 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.396789 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.407631 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.419674 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.431565 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.439264 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.449332 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.467207 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.483532 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.493822 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.493897 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.493944 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.493971 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.493995 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.497176 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.525200 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.538812 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.551564 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.597496 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.597541 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.597558 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.597589 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.597609 4817 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.700496 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.700563 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.700582 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.700612 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.700634 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.732124 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.732215 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.732154 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.732355 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.732472 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:09 crc kubenswrapper[4817]: E0314 05:34:09.732567 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.803266 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.803388 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.803404 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.803422 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.803439 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.907182 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.907240 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.907251 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.907271 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:09 crc kubenswrapper[4817]: I0314 05:34:09.907286 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:09Z","lastTransitionTime":"2026-03-14T05:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.010107 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.010163 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.010177 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.010200 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.010212 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.114593 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.114646 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.114659 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.114681 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.114696 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.218344 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.218455 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.218487 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.218522 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.218545 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.321786 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.321875 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.321901 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.321963 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.321985 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.424953 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.425023 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.425039 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.425063 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.425078 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.528272 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.528316 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.528325 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.528339 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.528348 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.631887 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.632325 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.632468 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.632605 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.632758 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.737002 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.737048 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.737060 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.737077 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.737090 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.840193 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.840276 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.840300 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.840335 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.840360 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.944053 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.944111 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.944126 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.944152 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:10 crc kubenswrapper[4817]: I0314 05:34:10.944166 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:10Z","lastTransitionTime":"2026-03-14T05:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.047340 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.047630 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.047713 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.047925 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.048018 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.151089 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.151146 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.151170 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.151204 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.151223 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.254544 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.254641 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.254670 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.254704 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.254730 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.358630 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.358747 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.358771 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.358804 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.358831 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.428162 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.428317 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.428443 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:19.428399463 +0000 UTC m=+113.466660249 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.428525 4817 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.428609 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:19.428580848 +0000 UTC m=+113.466841634 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.466544 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.466612 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.466630 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.466659 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.466680 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.530362 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.530626 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.530978 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.531001 4817 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.530955 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.531080 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:19.531054827 +0000 UTC m=+113.569315603 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.531113 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.531138 4817 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.531216 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:19.531194041 +0000 UTC m=+113.569454827 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.531236 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.531259 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.531273 4817 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.531316 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:19.531300084 +0000 UTC m=+113.569560870 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.570778 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.570830 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.570842 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.570861 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.570874 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.674127 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.674208 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.674232 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.674264 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.674287 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.731380 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.731492 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.731391 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.731559 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.731693 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:11 crc kubenswrapper[4817]: E0314 05:34:11.731975 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.777416 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.777472 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.777494 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.777519 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.777537 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.881072 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.881147 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.881202 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.881249 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.881276 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.983810 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.983859 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.983868 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.983888 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:11 crc kubenswrapper[4817]: I0314 05:34:11.983933 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:11Z","lastTransitionTime":"2026-03-14T05:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.087027 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.087337 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.087423 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.087515 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.087606 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.190372 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.190450 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.190471 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.190499 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.190522 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.294299 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.294369 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.294387 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.294414 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.294432 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.398246 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.398310 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.398346 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.398388 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.398414 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.502029 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.502430 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.502585 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.502720 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.502843 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.606515 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.606578 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.606601 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.606630 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.606656 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.710197 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.710271 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.710288 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.710313 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.710334 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.813965 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.814053 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.814071 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.814097 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.814115 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.917766 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.918396 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.918433 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.918467 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:12 crc kubenswrapper[4817]: I0314 05:34:12.918488 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:12Z","lastTransitionTime":"2026-03-14T05:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.022701 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.022781 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.022799 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.022827 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.022847 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.127076 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.127151 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.127172 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.127200 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.127221 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.230042 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.230131 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.230152 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.230184 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.230204 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.332741 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.332800 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.332812 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.332838 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.332853 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.435793 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.435837 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.435848 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.435867 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.435882 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.540203 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.540312 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.540332 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.540363 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.540384 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.643267 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.643354 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.643371 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.643394 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.643411 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.731718 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.731744 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:13 crc kubenswrapper[4817]: E0314 05:34:13.731860 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.731967 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:13 crc kubenswrapper[4817]: E0314 05:34:13.732007 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:13 crc kubenswrapper[4817]: E0314 05:34:13.732183 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.746285 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.746364 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.746382 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.746412 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.746434 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.849724 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.849792 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.849809 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.849835 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.849853 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.953527 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.953597 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.953618 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.953650 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:13 crc kubenswrapper[4817]: I0314 05:34:13.953673 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:13Z","lastTransitionTime":"2026-03-14T05:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.056187 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.056244 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.056261 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.056285 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.056301 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.160019 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.160076 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.160096 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.160126 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.160150 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.263487 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.263559 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.263573 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.263599 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.263613 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.366725 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.366814 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.366845 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.366877 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.366942 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.470090 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.470161 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.470218 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.470243 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.470262 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.512045 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-plxwm"] Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.512753 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.516569 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.516864 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.516638 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.517426 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.535280 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.551389 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.565687 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b55c361-074b-4eec-a066-14d7767cbad2-host\") pod \"node-ca-plxwm\" (UID: 
\"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.565797 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b55c361-074b-4eec-a066-14d7767cbad2-serviceca\") pod \"node-ca-plxwm\" (UID: \"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.565873 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qrh\" (UniqueName: \"kubernetes.io/projected/6b55c361-074b-4eec-a066-14d7767cbad2-kube-api-access-22qrh\") pod \"node-ca-plxwm\" (UID: \"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.569505 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.573104 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.573175 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc 
kubenswrapper[4817]: I0314 05:34:14.573201 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.573242 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.573273 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.584280 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.598376 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.616062 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.635689 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.654043 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.667801 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22qrh\" (UniqueName: \"kubernetes.io/projected/6b55c361-074b-4eec-a066-14d7767cbad2-kube-api-access-22qrh\") pod \"node-ca-plxwm\" (UID: \"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.667969 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b55c361-074b-4eec-a066-14d7767cbad2-host\") pod \"node-ca-plxwm\" (UID: \"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.668061 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b55c361-074b-4eec-a066-14d7767cbad2-serviceca\") pod \"node-ca-plxwm\" (UID: \"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.668183 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b55c361-074b-4eec-a066-14d7767cbad2-host\") pod \"node-ca-plxwm\" (UID: 
\"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.670294 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b55c361-074b-4eec-a066-14d7767cbad2-serviceca\") pod \"node-ca-plxwm\" (UID: \"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.670325 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.676077 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.676222 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.676263 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.676290 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.676311 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.689761 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.703676 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.713672 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22qrh\" (UniqueName: \"kubernetes.io/projected/6b55c361-074b-4eec-a066-14d7767cbad2-kube-api-access-22qrh\") pod \"node-ca-plxwm\" (UID: \"6b55c361-074b-4eec-a066-14d7767cbad2\") " pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.714383 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.729971 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.779383 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.779438 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.779449 4817 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.779468 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.779478 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.839764 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-plxwm" Mar 14 05:34:14 crc kubenswrapper[4817]: E0314 05:34:14.861776 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:14 crc kubenswrapper[4817]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 14 05:34:14 crc kubenswrapper[4817]: while [ true ]; Mar 14 05:34:14 crc kubenswrapper[4817]: do Mar 14 05:34:14 crc kubenswrapper[4817]: for f in $(ls /tmp/serviceca); do Mar 14 05:34:14 crc kubenswrapper[4817]: echo $f Mar 14 05:34:14 crc kubenswrapper[4817]: ca_file_path="/tmp/serviceca/${f}" Mar 14 05:34:14 crc kubenswrapper[4817]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 14 05:34:14 crc kubenswrapper[4817]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 14 05:34:14 crc kubenswrapper[4817]: if [ -e "${reg_dir_path}" ]; then Mar 14 05:34:14 crc kubenswrapper[4817]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 14 05:34:14 crc kubenswrapper[4817]: else Mar 14 05:34:14 crc 
kubenswrapper[4817]: mkdir $reg_dir_path Mar 14 05:34:14 crc kubenswrapper[4817]: cp $ca_file_path $reg_dir_path/ca.crt Mar 14 05:34:14 crc kubenswrapper[4817]: fi Mar 14 05:34:14 crc kubenswrapper[4817]: done Mar 14 05:34:14 crc kubenswrapper[4817]: for d in $(ls /etc/docker/certs.d); do Mar 14 05:34:14 crc kubenswrapper[4817]: echo $d Mar 14 05:34:14 crc kubenswrapper[4817]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 14 05:34:14 crc kubenswrapper[4817]: reg_conf_path="/tmp/serviceca/${dp}" Mar 14 05:34:14 crc kubenswrapper[4817]: if [ ! -e "${reg_conf_path}" ]; then Mar 14 05:34:14 crc kubenswrapper[4817]: rm -rf /etc/docker/certs.d/$d Mar 14 05:34:14 crc kubenswrapper[4817]: fi Mar 14 05:34:14 crc kubenswrapper[4817]: done Mar 14 05:34:14 crc kubenswrapper[4817]: sleep 60 & wait ${!} Mar 14 05:34:14 crc kubenswrapper[4817]: done Mar 14 05:34:14 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-22qrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-plxwm_openshift-image-registry(6b55c361-074b-4eec-a066-14d7767cbad2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:14 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:14 crc kubenswrapper[4817]: E0314 05:34:14.863622 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-plxwm" podUID="6b55c361-074b-4eec-a066-14d7767cbad2" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.882909 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.883264 4817 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.883360 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.883454 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.883521 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.986618 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.987348 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.987441 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.987513 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:14 crc kubenswrapper[4817]: I0314 05:34:14.987576 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:14Z","lastTransitionTime":"2026-03-14T05:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.091328 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.091402 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.091426 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.091453 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.091471 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.195346 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.195428 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.195447 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.195473 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.195496 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.220911 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-plxwm" event={"ID":"6b55c361-074b-4eec-a066-14d7767cbad2","Type":"ContainerStarted","Data":"6bd815b408709d8cbb5e4fb39ecc28fbc23fda921c923a8003f0de6ce3f38d02"} Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.222934 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:15 crc kubenswrapper[4817]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 14 05:34:15 crc kubenswrapper[4817]: while [ true ]; Mar 14 05:34:15 crc kubenswrapper[4817]: do Mar 14 05:34:15 crc kubenswrapper[4817]: for f in $(ls /tmp/serviceca); do Mar 14 05:34:15 crc kubenswrapper[4817]: echo $f Mar 14 05:34:15 crc kubenswrapper[4817]: ca_file_path="/tmp/serviceca/${f}" Mar 14 05:34:15 crc kubenswrapper[4817]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 14 05:34:15 crc kubenswrapper[4817]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 14 05:34:15 crc kubenswrapper[4817]: if [ -e "${reg_dir_path}" ]; then Mar 14 05:34:15 crc kubenswrapper[4817]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 14 05:34:15 crc kubenswrapper[4817]: else Mar 14 05:34:15 crc kubenswrapper[4817]: mkdir $reg_dir_path Mar 14 05:34:15 crc kubenswrapper[4817]: cp $ca_file_path $reg_dir_path/ca.crt Mar 14 05:34:15 crc kubenswrapper[4817]: fi Mar 14 05:34:15 crc kubenswrapper[4817]: done Mar 14 05:34:15 crc kubenswrapper[4817]: for d in $(ls /etc/docker/certs.d); do Mar 14 05:34:15 crc kubenswrapper[4817]: echo $d Mar 14 05:34:15 crc kubenswrapper[4817]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 14 05:34:15 crc kubenswrapper[4817]: reg_conf_path="/tmp/serviceca/${dp}" Mar 14 05:34:15 crc kubenswrapper[4817]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 14 05:34:15 crc kubenswrapper[4817]: rm -rf /etc/docker/certs.d/$d Mar 14 05:34:15 crc kubenswrapper[4817]: fi Mar 14 05:34:15 crc kubenswrapper[4817]: done Mar 14 05:34:15 crc kubenswrapper[4817]: sleep 60 & wait ${!} Mar 14 05:34:15 crc kubenswrapper[4817]: done Mar 14 05:34:15 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-22qrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-plxwm_openshift-image-registry(6b55c361-074b-4eec-a066-14d7767cbad2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:15 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.224262 4817 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-plxwm" podUID="6b55c361-074b-4eec-a066-14d7767cbad2" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.239881 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.251202 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.266577 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.285189 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.297969 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.298121 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.298205 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.298276 4817 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.298368 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.304821 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.318382 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.331462 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.331513 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.331531 4817 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.331556 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.331574 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.344437 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.346329 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.354475 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.354834 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.354940 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.354969 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.354992 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.360417 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.372011 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has no 
disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fb
a3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e90
19a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd8
8fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.374585 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.377029 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.377088 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.377107 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.377136 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.377156 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.387731 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.390456 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.395347 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.395410 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.395427 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.395452 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.395469 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.402793 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.409891 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.415171 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.415422 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.415496 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.415531 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.415557 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.415575 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.427716 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.428000 4817 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.428811 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.430712 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.430763 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.430785 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.430818 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.430837 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.535003 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.535131 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.535152 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.535183 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.535207 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.638358 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.638441 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.638471 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.638509 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.638538 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.731655 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.731767 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.731654 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.731862 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.731995 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:15 crc kubenswrapper[4817]: E0314 05:34:15.732184 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.741529 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.741598 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.741623 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.741654 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.741677 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.845223 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.845308 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.845326 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.845357 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.845380 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.949369 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.949464 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.949491 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.949533 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:15 crc kubenswrapper[4817]: I0314 05:34:15.949553 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:15Z","lastTransitionTime":"2026-03-14T05:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.052548 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.052624 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.052644 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.052675 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.052695 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.156186 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.156260 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.156278 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.156310 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.156331 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.258820 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.258882 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.258929 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.258957 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.258976 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.362168 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.362225 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.362247 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.362272 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.362291 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.465838 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.465940 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.465960 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.465992 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.466010 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.569872 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.569976 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.570009 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.570042 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.570064 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.673073 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.673123 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.673140 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.673163 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.673179 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: E0314 05:34:16.737548 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:16 crc kubenswrapper[4817]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:16 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:16 crc kubenswrapper[4817]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 05:34:16 crc kubenswrapper[4817]: source /etc/kubernetes/apiserver-url.env Mar 14 05:34:16 crc kubenswrapper[4817]: else Mar 14 05:34:16 crc kubenswrapper[4817]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 05:34:16 crc kubenswrapper[4817]: exit 1 Mar 14 05:34:16 crc kubenswrapper[4817]: fi Mar 14 05:34:16 crc kubenswrapper[4817]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 05:34:16 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:16 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:16 crc kubenswrapper[4817]: E0314 05:34:16.739254 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 14 05:34:16 crc kubenswrapper[4817]: E0314 05:34:16.741620 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:16 crc kubenswrapper[4817]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:16 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:16 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:16 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:16 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:16 crc kubenswrapper[4817]: fi Mar 14 05:34:16 crc kubenswrapper[4817]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 14 05:34:16 crc kubenswrapper[4817]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 05:34:16 crc kubenswrapper[4817]: ho_enable="--enable-hybrid-overlay" Mar 14 05:34:16 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 05:34:16 crc kubenswrapper[4817]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 05:34:16 crc kubenswrapper[4817]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 05:34:16 crc kubenswrapper[4817]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 05:34:16 crc kubenswrapper[4817]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 05:34:16 crc kubenswrapper[4817]: --webhook-host=127.0.0.1 \ Mar 14 05:34:16 crc kubenswrapper[4817]: --webhook-port=9743 \ Mar 14 05:34:16 crc kubenswrapper[4817]: ${ho_enable} \ Mar 14 05:34:16 crc kubenswrapper[4817]: --enable-interconnect \ Mar 14 05:34:16 crc kubenswrapper[4817]: --disable-approver \ Mar 14 05:34:16 crc kubenswrapper[4817]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 05:34:16 crc kubenswrapper[4817]: --wait-for-kubernetes-api=200s \ Mar 14 05:34:16 crc kubenswrapper[4817]: 
--pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 05:34:16 crc kubenswrapper[4817]: --loglevel="${LOGLEVEL}" Mar 14 05:34:16 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:16 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:16 crc kubenswrapper[4817]: E0314 05:34:16.744280 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:16 crc kubenswrapper[4817]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:16 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:16 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:16 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:16 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:16 crc kubenswrapper[4817]: fi Mar 14 05:34:16 crc kubenswrapper[4817]: Mar 14 05:34:16 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 05:34:16 crc kubenswrapper[4817]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 05:34:16 crc kubenswrapper[4817]: --disable-webhook \ Mar 14 05:34:16 crc kubenswrapper[4817]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 05:34:16 crc kubenswrapper[4817]: --loglevel="${LOGLEVEL}" Mar 14 05:34:16 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:16 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:16 crc kubenswrapper[4817]: E0314 05:34:16.744447 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:16 crc kubenswrapper[4817]: E0314 05:34:16.745476 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 05:34:16 crc kubenswrapper[4817]: E0314 05:34:16.746616 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.751141 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.765841 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.776153 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.776199 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.776221 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.776252 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.776275 4817 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.778456 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.797285 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.804924 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.820034 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.834408 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.862193 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.876054 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.879692 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.879766 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.879790 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.879819 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.879840 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.891134 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.903627 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.921543 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.935633 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.983091 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.983146 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.983158 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.983174 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:16 crc kubenswrapper[4817]: I0314 05:34:16.983184 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:16Z","lastTransitionTime":"2026-03-14T05:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.085473 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.085518 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.085530 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.085548 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.085561 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.187367 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.187406 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.187416 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.187431 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.187441 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.271572 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.289726 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.291017 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.291118 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.291220 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.291303 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.291363 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.305099 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.320548 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.335549 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.348514 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.364089 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.382849 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.394467 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.394503 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.394513 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.394527 4817 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.394537 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.400630 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.427732 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.449070 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.474411 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.497433 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.497485 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.497505 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.497531 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.497551 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.503847 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.518583 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.600282 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.600330 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 
05:34:17.600344 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.600362 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.600373 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.703378 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.703442 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.703461 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.703489 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.703509 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.731525 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.731534 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:17 crc kubenswrapper[4817]: E0314 05:34:17.731773 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.731579 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:17 crc kubenswrapper[4817]: E0314 05:34:17.731931 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:17 crc kubenswrapper[4817]: E0314 05:34:17.732051 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.806385 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.806455 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.806474 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.806504 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.806524 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.909675 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.909728 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.909744 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.909762 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:17 crc kubenswrapper[4817]: I0314 05:34:17.909775 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:17Z","lastTransitionTime":"2026-03-14T05:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.013117 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.013166 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.013177 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.013201 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.013246 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.116736 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.116791 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.116808 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.116837 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.116854 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.220013 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.220086 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.220104 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.220129 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.220147 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.323500 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.323544 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.323554 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.323569 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.323579 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.427015 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.427111 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.427131 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.427158 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.427179 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.530191 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.530278 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.530298 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.530326 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.530344 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.637405 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.637450 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.637469 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.637489 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.637503 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.739672 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.739755 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.739779 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.739809 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.739832 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.842561 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.842654 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.842688 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.842726 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.842751 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.946152 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.946218 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.946236 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.946261 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:18 crc kubenswrapper[4817]: I0314 05:34:18.946280 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:18Z","lastTransitionTime":"2026-03-14T05:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.049705 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.049756 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.049771 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.049789 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.049803 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.152346 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.152384 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.152445 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.152460 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.152472 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.254299 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.254332 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.254340 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.254353 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.254362 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.357302 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.357349 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.357358 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.357372 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.357382 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.460503 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.460585 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.460607 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.460631 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.460649 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.526994 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.527170 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.527269 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:35.52722947 +0000 UTC m=+129.565490226 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.527318 4817 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.527416 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:35.527390234 +0000 UTC m=+129.565651020 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.562605 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.562685 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.562704 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.562729 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.562748 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.628089 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.628260 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.628326 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628330 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628413 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628441 4817 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628446 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628481 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628512 4817 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628579 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:35.628503263 +0000 UTC m=+129.666764039 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628621 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:35.628599366 +0000 UTC m=+129.666860152 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.628861 4817 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.629237 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:35.629017228 +0000 UTC m=+129.667278054 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.665617 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.665670 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.665683 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.665699 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.665710 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.731196 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.731217 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.731439 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.731430 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.731772 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.731772 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.734339 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zgs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-jlnmq_openshift-multus(44e2523e-6f4b-475e-b733-a45e3744f774): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.734481 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:19 crc kubenswrapper[4817]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:19 crc kubenswrapper[4817]: set -uo pipefail Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 14 05:34:19 crc kubenswrapper[4817]: HOSTS_FILE="/etc/hosts" Mar 14 05:34:19 crc kubenswrapper[4817]: TEMP_FILE="/etc/hosts.tmp" Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: # Make a temporary file with the old hosts file's attributes. Mar 14 05:34:19 crc kubenswrapper[4817]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 14 05:34:19 crc kubenswrapper[4817]: echo "Failed to preserve hosts file. Exiting." Mar 14 05:34:19 crc kubenswrapper[4817]: exit 1 Mar 14 05:34:19 crc kubenswrapper[4817]: fi Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: while true; do Mar 14 05:34:19 crc kubenswrapper[4817]: declare -A svc_ips Mar 14 05:34:19 crc kubenswrapper[4817]: for svc in "${services[@]}"; do Mar 14 05:34:19 crc kubenswrapper[4817]: # Fetch service IP from cluster dns if present. We make several tries Mar 14 05:34:19 crc kubenswrapper[4817]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Mar 14 05:34:19 crc kubenswrapper[4817]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 14 05:34:19 crc kubenswrapper[4817]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 14 05:34:19 crc kubenswrapper[4817]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:19 crc kubenswrapper[4817]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:19 crc kubenswrapper[4817]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:19 crc kubenswrapper[4817]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 14 05:34:19 crc kubenswrapper[4817]: for i in ${!cmds[*]} Mar 14 05:34:19 crc kubenswrapper[4817]: do Mar 14 05:34:19 crc kubenswrapper[4817]: ips=($(eval "${cmds[i]}")) Mar 14 05:34:19 crc kubenswrapper[4817]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 14 05:34:19 crc kubenswrapper[4817]: svc_ips["${svc}"]="${ips[@]}" Mar 14 05:34:19 crc kubenswrapper[4817]: break Mar 14 05:34:19 crc kubenswrapper[4817]: fi Mar 14 05:34:19 crc kubenswrapper[4817]: done Mar 14 05:34:19 crc kubenswrapper[4817]: done Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: # Update /etc/hosts only if we get valid service IPs Mar 14 05:34:19 crc kubenswrapper[4817]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 14 05:34:19 crc kubenswrapper[4817]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 14 05:34:19 crc kubenswrapper[4817]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 14 05:34:19 crc kubenswrapper[4817]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 14 05:34:19 crc kubenswrapper[4817]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 14 05:34:19 crc kubenswrapper[4817]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 14 05:34:19 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:19 crc kubenswrapper[4817]: continue Mar 14 05:34:19 crc kubenswrapper[4817]: fi Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: # Append resolver entries for services Mar 14 05:34:19 crc kubenswrapper[4817]: rc=0 Mar 14 05:34:19 crc kubenswrapper[4817]: for svc in "${!svc_ips[@]}"; do Mar 14 05:34:19 crc kubenswrapper[4817]: for ip in ${svc_ips[${svc}]}; do Mar 14 05:34:19 crc kubenswrapper[4817]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 14 05:34:19 crc kubenswrapper[4817]: done Mar 14 05:34:19 crc kubenswrapper[4817]: done Mar 14 05:34:19 crc kubenswrapper[4817]: if [[ $rc -ne 0 ]]; then Mar 14 05:34:19 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:19 crc kubenswrapper[4817]: continue Mar 14 05:34:19 crc kubenswrapper[4817]: fi Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: Mar 14 05:34:19 crc kubenswrapper[4817]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 14 05:34:19 crc kubenswrapper[4817]: # Replace /etc/hosts with our modified version if needed Mar 14 05:34:19 crc kubenswrapper[4817]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 14 05:34:19 crc kubenswrapper[4817]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 14 05:34:19 crc kubenswrapper[4817]: fi Mar 14 05:34:19 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:19 crc kubenswrapper[4817]: unset svc_ips Mar 14 05:34:19 crc kubenswrapper[4817]: done Mar 14 05:34:19 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvlks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-v2gnk_openshift-dns(7626bd0c-9420-4e61-98b0-00e4c9eb21f2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:19 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.735554 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" 
podUID="44e2523e-6f4b-475e-b733-a45e3744f774" Mar 14 05:34:19 crc kubenswrapper[4817]: E0314 05:34:19.735618 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-v2gnk" podUID="7626bd0c-9420-4e61-98b0-00e4c9eb21f2" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.767753 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.767794 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.767804 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.767818 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.767831 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.871778 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.871846 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.871865 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.871896 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.871943 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.974840 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.974951 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.974977 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.975010 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:19 crc kubenswrapper[4817]: I0314 05:34:19.975032 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:19Z","lastTransitionTime":"2026-03-14T05:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.078001 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.078067 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.078080 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.078093 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.078102 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.180044 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.180092 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.180103 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.180121 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.180132 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.283523 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.283573 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.283585 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.283603 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.283623 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.364577 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w"] Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.365057 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.367955 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.368762 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.387082 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.387126 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.387136 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.387154 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.387167 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.388325 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\",\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.405076 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.425590 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.436651 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e636294e-01ac-40f2-a057-62894528f233-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.436867 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgzd\" (UniqueName: \"kubernetes.io/projected/e636294e-01ac-40f2-a057-62894528f233-kube-api-access-kcgzd\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.437002 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e636294e-01ac-40f2-a057-62894528f233-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.437043 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e636294e-01ac-40f2-a057-62894528f233-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: 
\"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.437807 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.449128 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.458721 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.471399 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.484309 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.489887 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.489969 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.489982 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.490001 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.490015 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.495216 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.504529 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.514278 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.522525 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.534030 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.538459 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e636294e-01ac-40f2-a057-62894528f233-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.538732 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgzd\" (UniqueName: \"kubernetes.io/projected/e636294e-01ac-40f2-a057-62894528f233-kube-api-access-kcgzd\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.538927 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e636294e-01ac-40f2-a057-62894528f233-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.539107 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e636294e-01ac-40f2-a057-62894528f233-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.540038 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e636294e-01ac-40f2-a057-62894528f233-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.540417 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e636294e-01ac-40f2-a057-62894528f233-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.549318 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.549607 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e636294e-01ac-40f2-a057-62894528f233-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.556130 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgzd\" (UniqueName: 
\"kubernetes.io/projected/e636294e-01ac-40f2-a057-62894528f233-kube-api-access-kcgzd\") pod \"ovnkube-control-plane-749d76644c-7gf8w\" (UID: \"e636294e-01ac-40f2-a057-62894528f233\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.592794 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.592863 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.592881 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.592937 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.592957 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.688004 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.696250 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.696454 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.696809 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.697058 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.697221 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: W0314 05:34:20.699773 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode636294e_01ac_40f2_a057_62894528f233.slice/crio-1b479446b0542624c7d7b00756c8c24c7bc3f4042f636a64ce1bdcc9d9719624 WatchSource:0}: Error finding container 1b479446b0542624c7d7b00756c8c24c7bc3f4042f636a64ce1bdcc9d9719624: Status 404 returned error can't find the container with id 1b479446b0542624c7d7b00756c8c24c7bc3f4042f636a64ce1bdcc9d9719624 Mar 14 05:34:20 crc kubenswrapper[4817]: E0314 05:34:20.701999 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:20 crc kubenswrapper[4817]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:20 crc kubenswrapper[4817]: set -euo pipefail Mar 14 05:34:20 crc kubenswrapper[4817]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 14 05:34:20 crc kubenswrapper[4817]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 14 05:34:20 crc kubenswrapper[4817]: # As the secret mount is optional we must wait for the files to be present. Mar 14 05:34:20 crc kubenswrapper[4817]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 14 05:34:20 crc kubenswrapper[4817]: TS=$(date +%s) Mar 14 05:34:20 crc kubenswrapper[4817]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 14 05:34:20 crc kubenswrapper[4817]: HAS_LOGGED_INFO=0 Mar 14 05:34:20 crc kubenswrapper[4817]: Mar 14 05:34:20 crc kubenswrapper[4817]: log_missing_certs(){ Mar 14 05:34:20 crc kubenswrapper[4817]: CUR_TS=$(date +%s) Mar 14 05:34:20 crc kubenswrapper[4817]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Mar 14 05:34:20 crc kubenswrapper[4817]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 14 05:34:20 crc kubenswrapper[4817]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 14 05:34:20 crc kubenswrapper[4817]: HAS_LOGGED_INFO=1 Mar 14 05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: } Mar 14 05:34:20 crc kubenswrapper[4817]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 14 05:34:20 crc kubenswrapper[4817]: log_missing_certs Mar 14 05:34:20 crc kubenswrapper[4817]: sleep 5 Mar 14 05:34:20 crc kubenswrapper[4817]: done Mar 14 05:34:20 crc kubenswrapper[4817]: Mar 14 05:34:20 crc kubenswrapper[4817]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 14 05:34:20 crc kubenswrapper[4817]: exec /usr/bin/kube-rbac-proxy \ Mar 14 05:34:20 crc kubenswrapper[4817]: --logtostderr \ Mar 14 05:34:20 crc kubenswrapper[4817]: --secure-listen-address=:9108 \ Mar 14 05:34:20 crc kubenswrapper[4817]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 14 05:34:20 crc kubenswrapper[4817]: --upstream=http://127.0.0.1:29108/ \ Mar 14 05:34:20 crc kubenswrapper[4817]: --tls-private-key-file=${TLS_PK} \ Mar 14 05:34:20 crc kubenswrapper[4817]: --tls-cert-file=${TLS_CERT} Mar 14 05:34:20 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcgzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-7gf8w_openshift-ovn-kubernetes(e636294e-01ac-40f2-a057-62894528f233): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:20 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:20 crc kubenswrapper[4817]: E0314 05:34:20.706829 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:20 crc kubenswrapper[4817]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:20 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:20 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:20 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: Mar 14 05:34:20 crc kubenswrapper[4817]: ovn_v4_join_subnet_opt= Mar 14 05:34:20 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 14 
05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: ovn_v6_join_subnet_opt= Mar 14 05:34:20 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 14 05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: Mar 14 05:34:20 crc kubenswrapper[4817]: ovn_v4_transit_switch_subnet_opt= Mar 14 05:34:20 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 14 05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: ovn_v6_transit_switch_subnet_opt= Mar 14 05:34:20 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 14 05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: Mar 14 05:34:20 crc kubenswrapper[4817]: dns_name_resolver_enabled_flag= Mar 14 05:34:20 crc kubenswrapper[4817]: if [[ "false" == "true" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 14 05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: Mar 14 05:34:20 crc kubenswrapper[4817]: persistent_ips_enabled_flag= Mar 14 05:34:20 crc kubenswrapper[4817]: if [[ "true" == "true" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 14 05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: Mar 14 05:34:20 crc kubenswrapper[4817]: # This is needed so that converting clusters from GA to TP Mar 14 05:34:20 crc kubenswrapper[4817]: # will rollout control plane pods as well Mar 14 05:34:20 crc kubenswrapper[4817]: network_segmentation_enabled_flag= Mar 14 05:34:20 crc kubenswrapper[4817]: multi_network_enabled_flag= Mar 14 05:34:20 crc 
kubenswrapper[4817]: if [[ "true" == "true" ]]; then Mar 14 05:34:20 crc kubenswrapper[4817]: multi_network_enabled_flag="--enable-multi-network" Mar 14 05:34:20 crc kubenswrapper[4817]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 14 05:34:20 crc kubenswrapper[4817]: fi Mar 14 05:34:20 crc kubenswrapper[4817]: Mar 14 05:34:20 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 14 05:34:20 crc kubenswrapper[4817]: exec /usr/bin/ovnkube \ Mar 14 05:34:20 crc kubenswrapper[4817]: --enable-interconnect \ Mar 14 05:34:20 crc kubenswrapper[4817]: --init-cluster-manager "${K8S_NODE}" \ Mar 14 05:34:20 crc kubenswrapper[4817]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 14 05:34:20 crc kubenswrapper[4817]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 14 05:34:20 crc kubenswrapper[4817]: --metrics-bind-address "127.0.0.1:29108" \ Mar 14 05:34:20 crc kubenswrapper[4817]: --metrics-enable-pprof \ Mar 14 05:34:20 crc kubenswrapper[4817]: --metrics-enable-config-duration \ Mar 14 05:34:20 crc kubenswrapper[4817]: ${ovn_v4_join_subnet_opt} \ Mar 14 05:34:20 crc kubenswrapper[4817]: ${ovn_v6_join_subnet_opt} \ Mar 14 05:34:20 crc kubenswrapper[4817]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 14 05:34:20 crc kubenswrapper[4817]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 14 05:34:20 crc kubenswrapper[4817]: ${dns_name_resolver_enabled_flag} \ Mar 14 05:34:20 crc kubenswrapper[4817]: ${persistent_ips_enabled_flag} \ Mar 14 05:34:20 crc kubenswrapper[4817]: ${multi_network_enabled_flag} \ Mar 14 05:34:20 crc kubenswrapper[4817]: ${network_segmentation_enabled_flag} Mar 14 05:34:20 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcgzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-7gf8w_openshift-ovn-kubernetes(e636294e-01ac-40f2-a057-62894528f233): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:20 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:20 crc kubenswrapper[4817]: E0314 05:34:20.709111 4817 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" podUID="e636294e-01ac-40f2-a057-62894528f233" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.799817 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.799855 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.799865 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.799879 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.799889 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.903215 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.903270 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.903292 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.903317 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:20 crc kubenswrapper[4817]: I0314 05:34:20.903334 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:20Z","lastTransitionTime":"2026-03-14T05:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.006109 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.006144 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.006152 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.006167 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.006177 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.104970 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4lfsz"] Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.106281 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.106370 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.109541 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.109638 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.109660 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.109688 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.109716 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.124202 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.137340 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.147092 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpj2b\" (UniqueName: \"kubernetes.io/projected/aae80926-3fb7-4be8-80a0-25c27ee13a03-kube-api-access-fpj2b\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.147204 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.153042 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.173227 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.190642 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.205788 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.213996 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.214194 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.214327 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.214496 4817 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.214641 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.233084 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.238654 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" event={"ID":"e636294e-01ac-40f2-a057-62894528f233","Type":"ContainerStarted","Data":"1b479446b0542624c7d7b00756c8c24c7bc3f4042f636a64ce1bdcc9d9719624"} Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.241560 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:21 crc kubenswrapper[4817]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:21 crc kubenswrapper[4817]: set -euo pipefail Mar 14 05:34:21 crc kubenswrapper[4817]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 14 05:34:21 crc kubenswrapper[4817]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 14 05:34:21 crc kubenswrapper[4817]: # As the secret mount is optional we must wait for the files to be present. Mar 14 05:34:21 crc kubenswrapper[4817]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 14 05:34:21 crc kubenswrapper[4817]: TS=$(date +%s) Mar 14 05:34:21 crc kubenswrapper[4817]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 14 05:34:21 crc kubenswrapper[4817]: HAS_LOGGED_INFO=0 Mar 14 05:34:21 crc kubenswrapper[4817]: Mar 14 05:34:21 crc kubenswrapper[4817]: log_missing_certs(){ Mar 14 05:34:21 crc kubenswrapper[4817]: CUR_TS=$(date +%s) Mar 14 05:34:21 crc kubenswrapper[4817]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 14 05:34:21 crc kubenswrapper[4817]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 14 05:34:21 crc kubenswrapper[4817]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 14 05:34:21 crc kubenswrapper[4817]: HAS_LOGGED_INFO=1 Mar 14 05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: } Mar 14 05:34:21 crc kubenswrapper[4817]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 14 05:34:21 crc kubenswrapper[4817]: log_missing_certs Mar 14 05:34:21 crc kubenswrapper[4817]: sleep 5 Mar 14 05:34:21 crc kubenswrapper[4817]: done Mar 14 05:34:21 crc kubenswrapper[4817]: Mar 14 05:34:21 crc kubenswrapper[4817]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 14 05:34:21 crc kubenswrapper[4817]: exec /usr/bin/kube-rbac-proxy \ Mar 14 05:34:21 crc kubenswrapper[4817]: --logtostderr \ Mar 14 05:34:21 crc kubenswrapper[4817]: --secure-listen-address=:9108 \ Mar 14 05:34:21 crc kubenswrapper[4817]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 14 05:34:21 crc kubenswrapper[4817]: --upstream=http://127.0.0.1:29108/ \ Mar 14 05:34:21 crc kubenswrapper[4817]: --tls-private-key-file=${TLS_PK} \ Mar 14 05:34:21 crc kubenswrapper[4817]: --tls-cert-file=${TLS_CERT} Mar 14 05:34:21 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcgzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-7gf8w_openshift-ovn-kubernetes(e636294e-01ac-40f2-a057-62894528f233): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:21 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.245798 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:21 crc kubenswrapper[4817]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:21 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:21 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:21 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: Mar 14 05:34:21 crc kubenswrapper[4817]: ovn_v4_join_subnet_opt= Mar 14 05:34:21 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 14 
05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: ovn_v6_join_subnet_opt= Mar 14 05:34:21 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 14 05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: Mar 14 05:34:21 crc kubenswrapper[4817]: ovn_v4_transit_switch_subnet_opt= Mar 14 05:34:21 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 14 05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: ovn_v6_transit_switch_subnet_opt= Mar 14 05:34:21 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 14 05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: Mar 14 05:34:21 crc kubenswrapper[4817]: dns_name_resolver_enabled_flag= Mar 14 05:34:21 crc kubenswrapper[4817]: if [[ "false" == "true" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 14 05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: Mar 14 05:34:21 crc kubenswrapper[4817]: persistent_ips_enabled_flag= Mar 14 05:34:21 crc kubenswrapper[4817]: if [[ "true" == "true" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 14 05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: Mar 14 05:34:21 crc kubenswrapper[4817]: # This is needed so that converting clusters from GA to TP Mar 14 05:34:21 crc kubenswrapper[4817]: # will rollout control plane pods as well Mar 14 05:34:21 crc kubenswrapper[4817]: network_segmentation_enabled_flag= Mar 14 05:34:21 crc kubenswrapper[4817]: multi_network_enabled_flag= Mar 14 05:34:21 crc 
kubenswrapper[4817]: if [[ "true" == "true" ]]; then Mar 14 05:34:21 crc kubenswrapper[4817]: multi_network_enabled_flag="--enable-multi-network" Mar 14 05:34:21 crc kubenswrapper[4817]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 14 05:34:21 crc kubenswrapper[4817]: fi Mar 14 05:34:21 crc kubenswrapper[4817]: Mar 14 05:34:21 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 14 05:34:21 crc kubenswrapper[4817]: exec /usr/bin/ovnkube \ Mar 14 05:34:21 crc kubenswrapper[4817]: --enable-interconnect \ Mar 14 05:34:21 crc kubenswrapper[4817]: --init-cluster-manager "${K8S_NODE}" \ Mar 14 05:34:21 crc kubenswrapper[4817]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 14 05:34:21 crc kubenswrapper[4817]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 14 05:34:21 crc kubenswrapper[4817]: --metrics-bind-address "127.0.0.1:29108" \ Mar 14 05:34:21 crc kubenswrapper[4817]: --metrics-enable-pprof \ Mar 14 05:34:21 crc kubenswrapper[4817]: --metrics-enable-config-duration \ Mar 14 05:34:21 crc kubenswrapper[4817]: ${ovn_v4_join_subnet_opt} \ Mar 14 05:34:21 crc kubenswrapper[4817]: ${ovn_v6_join_subnet_opt} \ Mar 14 05:34:21 crc kubenswrapper[4817]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 14 05:34:21 crc kubenswrapper[4817]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 14 05:34:21 crc kubenswrapper[4817]: ${dns_name_resolver_enabled_flag} \ Mar 14 05:34:21 crc kubenswrapper[4817]: ${persistent_ips_enabled_flag} \ Mar 14 05:34:21 crc kubenswrapper[4817]: ${multi_network_enabled_flag} \ Mar 14 05:34:21 crc kubenswrapper[4817]: ${network_segmentation_enabled_flag} Mar 14 05:34:21 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcgzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-7gf8w_openshift-ovn-kubernetes(e636294e-01ac-40f2-a057-62894528f233): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:21 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.247029 4817 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" podUID="e636294e-01ac-40f2-a057-62894528f233" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.248327 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.249469 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpj2b\" (UniqueName: \"kubernetes.io/projected/aae80926-3fb7-4be8-80a0-25c27ee13a03-kube-api-access-fpj2b\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.249558 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.249719 4817 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.249780 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs podName:aae80926-3fb7-4be8-80a0-25c27ee13a03 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:21.749758174 +0000 UTC m=+115.788018960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs") pod "network-metrics-daemon-4lfsz" (UID: "aae80926-3fb7-4be8-80a0-25c27ee13a03") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.266667 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.278377 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpj2b\" (UniqueName: \"kubernetes.io/projected/aae80926-3fb7-4be8-80a0-25c27ee13a03-kube-api-access-fpj2b\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.281221 
4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.292434 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.306878 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.317542 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.317725 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.318005 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.318186 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.318320 4817 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.320701 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.333455 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.343973 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.363201 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.371975 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.387795 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.398326 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.408669 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.417328 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.421003 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.421220 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 
05:34:21.421348 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.421602 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.421763 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.425637 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.436089 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.448863 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.456975 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.465800 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.479813 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.495163 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.504565 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.511132 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.524708 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.524790 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.524815 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.524850 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.524873 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.628139 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.628213 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.628233 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.628259 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.628331 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.730787 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.731145 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.731158 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.731096 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.731257 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.731149 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.731351 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.731505 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.732074 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.732064 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.732326 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.755524 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.755738 4817 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:21 crc kubenswrapper[4817]: E0314 05:34:21.755862 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs podName:aae80926-3fb7-4be8-80a0-25c27ee13a03 nodeName:}" failed. 
No retries permitted until 2026-03-14 05:34:22.755827153 +0000 UTC m=+116.794087939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs") pod "network-metrics-daemon-4lfsz" (UID: "aae80926-3fb7-4be8-80a0-25c27ee13a03") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.834743 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.834824 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.834843 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.834867 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.834884 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.936952 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.936993 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.937005 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.937023 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:21 crc kubenswrapper[4817]: I0314 05:34:21.937035 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:21Z","lastTransitionTime":"2026-03-14T05:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.040025 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.040446 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.040507 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.040539 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.040564 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.143364 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.143437 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.143465 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.143495 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.143519 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.246133 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.246238 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.246305 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.246335 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.246370 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.350076 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.350148 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.350166 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.350194 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.350219 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.453275 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.453326 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.453338 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.453356 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.453367 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.556148 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.556251 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.556273 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.556304 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.556323 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.659804 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.659939 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.659965 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.659992 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.660010 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.731449 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:22 crc kubenswrapper[4817]: E0314 05:34:22.732004 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:22 crc kubenswrapper[4817]: E0314 05:34:22.734124 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:22 crc kubenswrapper[4817]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 14 05:34:22 crc kubenswrapper[4817]: apiVersion: v1 Mar 14 05:34:22 crc kubenswrapper[4817]: clusters: Mar 14 05:34:22 crc kubenswrapper[4817]: - cluster: Mar 14 05:34:22 crc kubenswrapper[4817]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 14 05:34:22 crc kubenswrapper[4817]: server: https://api-int.crc.testing:6443 Mar 14 05:34:22 crc kubenswrapper[4817]: name: default-cluster Mar 14 05:34:22 crc kubenswrapper[4817]: contexts: Mar 14 05:34:22 crc kubenswrapper[4817]: - context: Mar 14 05:34:22 crc kubenswrapper[4817]: cluster: default-cluster Mar 14 05:34:22 crc kubenswrapper[4817]: namespace: default Mar 14 05:34:22 crc kubenswrapper[4817]: user: default-auth Mar 14 05:34:22 crc kubenswrapper[4817]: name: default-context Mar 14 05:34:22 crc kubenswrapper[4817]: current-context: default-context Mar 14 05:34:22 crc kubenswrapper[4817]: kind: Config Mar 14 05:34:22 crc kubenswrapper[4817]: preferences: {} Mar 14 05:34:22 crc kubenswrapper[4817]: users: Mar 14 05:34:22 crc kubenswrapper[4817]: - name: default-auth Mar 14 05:34:22 crc kubenswrapper[4817]: user: Mar 14 05:34:22 crc kubenswrapper[4817]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 14 05:34:22 crc kubenswrapper[4817]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 14 05:34:22 crc kubenswrapper[4817]: EOF Mar 14 05:34:22 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xh7fm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-tntn6_openshift-ovn-kubernetes(dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:22 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:22 crc kubenswrapper[4817]: E0314 05:34:22.735261 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.762282 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.762356 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.762373 4817 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.762396 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.762420 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.768018 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:22 crc kubenswrapper[4817]: E0314 05:34:22.768173 4817 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:22 crc kubenswrapper[4817]: E0314 05:34:22.768241 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs podName:aae80926-3fb7-4be8-80a0-25c27ee13a03 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:24.768223087 +0000 UTC m=+118.806483843 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs") pod "network-metrics-daemon-4lfsz" (UID: "aae80926-3fb7-4be8-80a0-25c27ee13a03") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.865175 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.865225 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.865236 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.865251 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.865260 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.970126 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.970483 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.970623 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.970776 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:22 crc kubenswrapper[4817]: I0314 05:34:22.970982 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:22Z","lastTransitionTime":"2026-03-14T05:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.074085 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.074415 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.074502 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.074606 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.074689 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.178337 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.178617 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.178687 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.178752 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.178813 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.286853 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.286965 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.286988 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.287021 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.287051 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.390350 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.390440 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.390460 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.390489 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.390510 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.494516 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.494601 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.494624 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.494656 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.494679 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.597940 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.598282 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.598426 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.598594 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.598741 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.702068 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.702151 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.702176 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.702202 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.702218 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.731960 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.732289 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.732324 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:23 crc kubenswrapper[4817]: E0314 05:34:23.732729 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:23 crc kubenswrapper[4817]: E0314 05:34:23.732527 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:23 crc kubenswrapper[4817]: E0314 05:34:23.733141 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:23 crc kubenswrapper[4817]: E0314 05:34:23.734686 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:23 crc kubenswrapper[4817]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 14 05:34:23 crc kubenswrapper[4817]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 14 05:34:23 crc kubenswrapper[4817]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvwj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-wdf7p_openshift-multus(217c6f57-e799-4243-86ea-5b76c95c95ec): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:23 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:23 crc kubenswrapper[4817]: E0314 05:34:23.735956 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-wdf7p" podUID="217c6f57-e799-4243-86ea-5b76c95c95ec" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.805723 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.805794 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.805829 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.805863 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.805886 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.909715 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.909786 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.909805 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.909835 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:23 crc kubenswrapper[4817]: I0314 05:34:23.909854 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:23Z","lastTransitionTime":"2026-03-14T05:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.012947 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.013088 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.013117 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.013148 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.013171 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.117105 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.117171 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.117188 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.117212 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.117233 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.220836 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.220927 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.220947 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.220974 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.220992 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.324186 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.324975 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.325030 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.325062 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.325081 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.428110 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.428156 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.428165 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.428184 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.428198 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.531063 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.531147 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.531161 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.531182 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.531195 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.635156 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.635224 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.635243 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.635269 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.635289 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.731570 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:24 crc kubenswrapper[4817]: E0314 05:34:24.732110 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:24 crc kubenswrapper[4817]: E0314 05:34:24.734104 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:24 crc kubenswrapper[4817]: E0314 05:34:24.736341 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vtmnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:24 crc kubenswrapper[4817]: E0314 05:34:24.737514 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.737875 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.737973 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.737993 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.738023 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.738045 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.793939 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:24 crc kubenswrapper[4817]: E0314 05:34:24.794222 4817 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:24 crc kubenswrapper[4817]: E0314 05:34:24.794339 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs podName:aae80926-3fb7-4be8-80a0-25c27ee13a03 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:28.794310024 +0000 UTC m=+122.832570810 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs") pod "network-metrics-daemon-4lfsz" (UID: "aae80926-3fb7-4be8-80a0-25c27ee13a03") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.840980 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.841049 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.841070 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.841099 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.841120 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.943067 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.943120 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.943137 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.943158 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:24 crc kubenswrapper[4817]: I0314 05:34:24.943175 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:24Z","lastTransitionTime":"2026-03-14T05:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.045853 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.046264 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.046418 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.046562 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.046714 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.149761 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.149837 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.149858 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.149932 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.149967 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.252429 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.252497 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.252508 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.252526 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.252538 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.356059 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.356166 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.356190 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.356224 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.356244 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.459660 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.459738 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.459756 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.459804 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.459823 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.538951 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.539017 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.539035 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.539060 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.539078 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.588825 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.594507 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.594589 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.594608 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.594634 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.594652 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.613615 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.620100 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.620169 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.620187 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.620213 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.620230 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.638021 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.646159 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.646201 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.646212 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.646232 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.646247 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.657349 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.663882 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.664001 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.664031 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.664070 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.664094 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.684555 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.684956 4817 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.687351 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.687439 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.687462 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.687493 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.687518 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.731041 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.731095 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.731053 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.731235 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.731329 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:25 crc kubenswrapper[4817]: E0314 05:34:25.731406 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.790434 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.790469 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.790477 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.790490 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.790499 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.893177 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.893244 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.893264 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.893291 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.893310 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.996526 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.996595 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.996614 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.996643 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:25 crc kubenswrapper[4817]: I0314 05:34:25.996668 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:25Z","lastTransitionTime":"2026-03-14T05:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.100777 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.100862 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.100887 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.100953 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.100977 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:26Z","lastTransitionTime":"2026-03-14T05:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.203300 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.203383 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.203405 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.203437 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.203457 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:26Z","lastTransitionTime":"2026-03-14T05:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.306399 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.306482 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.306497 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.306524 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.306543 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:26Z","lastTransitionTime":"2026-03-14T05:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.410659 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.410753 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.410774 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.410805 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.410938 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:26Z","lastTransitionTime":"2026-03-14T05:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.514206 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.514280 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.514299 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.514326 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.514349 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:26Z","lastTransitionTime":"2026-03-14T05:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.618320 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.618370 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.618381 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.618399 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.618409 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:26Z","lastTransitionTime":"2026-03-14T05:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:26 crc kubenswrapper[4817]: E0314 05:34:26.719534 4817 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.731541 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:26 crc kubenswrapper[4817]: E0314 05:34:26.731692 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.750970 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.765011 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.779151 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.793518 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.808696 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.821130 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.835216 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.851459 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.868987 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.887251 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.919937 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.933263 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.945559 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.957436 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:26 crc kubenswrapper[4817]: I0314 05:34:26.968377 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:27 crc kubenswrapper[4817]: E0314 05:34:27.082351 4817 kubelet.go:2916] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 05:34:27 crc kubenswrapper[4817]: I0314 05:34:27.731862 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:27 crc kubenswrapper[4817]: I0314 05:34:27.731972 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:27 crc kubenswrapper[4817]: E0314 05:34:27.732066 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:27 crc kubenswrapper[4817]: E0314 05:34:27.732203 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:27 crc kubenswrapper[4817]: I0314 05:34:27.732796 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:27 crc kubenswrapper[4817]: E0314 05:34:27.733064 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:28 crc kubenswrapper[4817]: I0314 05:34:28.731353 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:28 crc kubenswrapper[4817]: E0314 05:34:28.731737 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:28 crc kubenswrapper[4817]: E0314 05:34:28.734166 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:28 crc kubenswrapper[4817]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 14 05:34:28 crc kubenswrapper[4817]: while [ true ]; Mar 14 05:34:28 crc kubenswrapper[4817]: do Mar 14 05:34:28 crc kubenswrapper[4817]: for f in $(ls /tmp/serviceca); do Mar 14 05:34:28 crc kubenswrapper[4817]: echo $f Mar 14 05:34:28 crc kubenswrapper[4817]: ca_file_path="/tmp/serviceca/${f}" Mar 14 05:34:28 crc kubenswrapper[4817]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 14 05:34:28 crc kubenswrapper[4817]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 14 05:34:28 crc kubenswrapper[4817]: if [ -e "${reg_dir_path}" ]; then Mar 14 05:34:28 crc kubenswrapper[4817]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 14 05:34:28 crc kubenswrapper[4817]: else Mar 14 05:34:28 crc kubenswrapper[4817]: mkdir $reg_dir_path Mar 14 05:34:28 crc kubenswrapper[4817]: cp $ca_file_path $reg_dir_path/ca.crt Mar 14 05:34:28 crc kubenswrapper[4817]: fi Mar 14 05:34:28 crc kubenswrapper[4817]: done Mar 14 05:34:28 crc kubenswrapper[4817]: for d in $(ls /etc/docker/certs.d); do Mar 14 05:34:28 crc kubenswrapper[4817]: echo $d Mar 14 05:34:28 crc kubenswrapper[4817]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 14 05:34:28 crc kubenswrapper[4817]: reg_conf_path="/tmp/serviceca/${dp}" Mar 14 05:34:28 crc kubenswrapper[4817]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 14 05:34:28 crc kubenswrapper[4817]: rm -rf /etc/docker/certs.d/$d Mar 14 05:34:28 crc kubenswrapper[4817]: fi Mar 14 05:34:28 crc kubenswrapper[4817]: done Mar 14 05:34:28 crc kubenswrapper[4817]: sleep 60 & wait ${!} Mar 14 05:34:28 crc kubenswrapper[4817]: done Mar 14 05:34:28 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-22qrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-plxwm_openshift-image-registry(6b55c361-074b-4eec-a066-14d7767cbad2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:28 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:28 crc kubenswrapper[4817]: E0314 05:34:28.735273 4817 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-plxwm" podUID="6b55c361-074b-4eec-a066-14d7767cbad2" Mar 14 05:34:28 crc kubenswrapper[4817]: I0314 05:34:28.847168 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:28 crc kubenswrapper[4817]: E0314 05:34:28.847478 4817 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:28 crc kubenswrapper[4817]: E0314 05:34:28.847627 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs podName:aae80926-3fb7-4be8-80a0-25c27ee13a03 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:36.847591909 +0000 UTC m=+130.885852665 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs") pod "network-metrics-daemon-4lfsz" (UID: "aae80926-3fb7-4be8-80a0-25c27ee13a03") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:29 crc kubenswrapper[4817]: I0314 05:34:29.731864 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:29 crc kubenswrapper[4817]: I0314 05:34:29.731947 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:29 crc kubenswrapper[4817]: I0314 05:34:29.732012 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:29 crc kubenswrapper[4817]: E0314 05:34:29.732200 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:29 crc kubenswrapper[4817]: E0314 05:34:29.732536 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:29 crc kubenswrapper[4817]: E0314 05:34:29.732725 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:29 crc kubenswrapper[4817]: E0314 05:34:29.734693 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:29 crc kubenswrapper[4817]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:29 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:29 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:29 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:29 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:29 crc kubenswrapper[4817]: fi Mar 14 05:34:29 crc kubenswrapper[4817]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 14 05:34:29 crc kubenswrapper[4817]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 05:34:29 crc kubenswrapper[4817]: ho_enable="--enable-hybrid-overlay" Mar 14 05:34:29 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 05:34:29 crc kubenswrapper[4817]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 05:34:29 crc kubenswrapper[4817]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 05:34:29 crc kubenswrapper[4817]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 05:34:29 crc kubenswrapper[4817]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 05:34:29 crc kubenswrapper[4817]: --webhook-host=127.0.0.1 \ Mar 14 05:34:29 crc kubenswrapper[4817]: --webhook-port=9743 \ Mar 14 05:34:29 crc kubenswrapper[4817]: ${ho_enable} \ Mar 14 05:34:29 crc kubenswrapper[4817]: --enable-interconnect \ Mar 14 05:34:29 crc kubenswrapper[4817]: 
--disable-approver \ Mar 14 05:34:29 crc kubenswrapper[4817]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 05:34:29 crc kubenswrapper[4817]: --wait-for-kubernetes-api=200s \ Mar 14 05:34:29 crc kubenswrapper[4817]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 05:34:29 crc kubenswrapper[4817]: --loglevel="${LOGLEVEL}" Mar 14 05:34:29 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:29 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:29 crc kubenswrapper[4817]: E0314 05:34:29.737368 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:29 crc kubenswrapper[4817]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:29 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:29 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:29 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:29 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:29 crc kubenswrapper[4817]: fi Mar 14 05:34:29 crc kubenswrapper[4817]: Mar 14 05:34:29 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 05:34:29 crc kubenswrapper[4817]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 05:34:29 crc kubenswrapper[4817]: --disable-webhook \ Mar 14 05:34:29 crc kubenswrapper[4817]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 05:34:29 crc kubenswrapper[4817]: --loglevel="${LOGLEVEL}" Mar 14 05:34:29 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:29 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:29 crc kubenswrapper[4817]: E0314 05:34:29.738545 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 05:34:30 crc kubenswrapper[4817]: I0314 05:34:30.731567 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:30 crc kubenswrapper[4817]: E0314 05:34:30.731985 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:30 crc kubenswrapper[4817]: E0314 05:34:30.734564 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zgs6,ReadOnly:true,MountPath:/var/run/secrets/kub
ernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-jlnmq_openshift-multus(44e2523e-6f4b-475e-b733-a45e3744f774): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:30 crc kubenswrapper[4817]: E0314 05:34:30.735333 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:30 crc kubenswrapper[4817]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:30 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:30 crc kubenswrapper[4817]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 05:34:30 crc kubenswrapper[4817]: source /etc/kubernetes/apiserver-url.env Mar 14 05:34:30 crc kubenswrapper[4817]: else Mar 14 05:34:30 crc kubenswrapper[4817]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 05:34:30 crc kubenswrapper[4817]: exit 1 Mar 14 05:34:30 crc kubenswrapper[4817]: fi Mar 14 05:34:30 crc kubenswrapper[4817]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 05:34:30 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:30 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:30 crc kubenswrapper[4817]: E0314 05:34:30.735746 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" podUID="44e2523e-6f4b-475e-b733-a45e3744f774" Mar 14 05:34:30 crc kubenswrapper[4817]: E0314 05:34:30.737185 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services 
have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 14 05:34:31 crc kubenswrapper[4817]: I0314 05:34:31.731308 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:31 crc kubenswrapper[4817]: I0314 05:34:31.731746 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:31 crc kubenswrapper[4817]: I0314 05:34:31.731827 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:31 crc kubenswrapper[4817]: E0314 05:34:31.732090 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:31 crc kubenswrapper[4817]: E0314 05:34:31.732401 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:31 crc kubenswrapper[4817]: E0314 05:34:31.732638 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:31 crc kubenswrapper[4817]: E0314 05:34:31.733736 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 05:34:31 crc kubenswrapper[4817]: E0314 05:34:31.734984 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 05:34:32 crc kubenswrapper[4817]: E0314 05:34:32.084516 4817 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 05:34:32 crc kubenswrapper[4817]: I0314 05:34:32.731414 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:32 crc kubenswrapper[4817]: E0314 05:34:32.731579 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:32 crc kubenswrapper[4817]: E0314 05:34:32.733960 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:32 crc kubenswrapper[4817]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:32 crc kubenswrapper[4817]: set -euo pipefail Mar 14 05:34:32 crc kubenswrapper[4817]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 14 05:34:32 crc kubenswrapper[4817]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 14 05:34:32 crc kubenswrapper[4817]: # As the secret mount is optional we must wait for the files to be present. Mar 14 05:34:32 crc kubenswrapper[4817]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 14 05:34:32 crc kubenswrapper[4817]: TS=$(date +%s) Mar 14 05:34:32 crc kubenswrapper[4817]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 14 05:34:32 crc kubenswrapper[4817]: HAS_LOGGED_INFO=0 Mar 14 05:34:32 crc kubenswrapper[4817]: Mar 14 05:34:32 crc kubenswrapper[4817]: log_missing_certs(){ Mar 14 05:34:32 crc kubenswrapper[4817]: CUR_TS=$(date +%s) Mar 14 05:34:32 crc kubenswrapper[4817]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 14 05:34:32 crc kubenswrapper[4817]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 14 05:34:32 crc kubenswrapper[4817]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 14 05:34:32 crc kubenswrapper[4817]: HAS_LOGGED_INFO=1 Mar 14 05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: } Mar 14 05:34:32 crc kubenswrapper[4817]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 14 05:34:32 crc kubenswrapper[4817]: log_missing_certs Mar 14 05:34:32 crc kubenswrapper[4817]: sleep 5 Mar 14 05:34:32 crc kubenswrapper[4817]: done Mar 14 05:34:32 crc kubenswrapper[4817]: Mar 14 05:34:32 crc kubenswrapper[4817]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 14 05:34:32 crc kubenswrapper[4817]: exec /usr/bin/kube-rbac-proxy \ Mar 14 05:34:32 crc kubenswrapper[4817]: --logtostderr \ Mar 14 05:34:32 crc kubenswrapper[4817]: --secure-listen-address=:9108 \ Mar 14 05:34:32 crc kubenswrapper[4817]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 14 05:34:32 crc kubenswrapper[4817]: --upstream=http://127.0.0.1:29108/ \ Mar 14 05:34:32 crc kubenswrapper[4817]: --tls-private-key-file=${TLS_PK} \ Mar 14 05:34:32 crc kubenswrapper[4817]: --tls-cert-file=${TLS_CERT} Mar 14 05:34:32 crc kubenswrapper[4817]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcgzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-7gf8w_openshift-ovn-kubernetes(e636294e-01ac-40f2-a057-62894528f233): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:32 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:32 crc kubenswrapper[4817]: E0314 05:34:32.736130 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:32 crc kubenswrapper[4817]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 05:34:32 crc kubenswrapper[4817]: if [[ -f "/env/_master" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: set -o allexport Mar 14 05:34:32 crc kubenswrapper[4817]: source "/env/_master" Mar 14 05:34:32 crc kubenswrapper[4817]: set +o allexport Mar 14 05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: Mar 14 05:34:32 crc kubenswrapper[4817]: ovn_v4_join_subnet_opt= Mar 14 05:34:32 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 14 
05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: ovn_v6_join_subnet_opt= Mar 14 05:34:32 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 14 05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: Mar 14 05:34:32 crc kubenswrapper[4817]: ovn_v4_transit_switch_subnet_opt= Mar 14 05:34:32 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 14 05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: ovn_v6_transit_switch_subnet_opt= Mar 14 05:34:32 crc kubenswrapper[4817]: if [[ "" != "" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 14 05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: Mar 14 05:34:32 crc kubenswrapper[4817]: dns_name_resolver_enabled_flag= Mar 14 05:34:32 crc kubenswrapper[4817]: if [[ "false" == "true" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 14 05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: Mar 14 05:34:32 crc kubenswrapper[4817]: persistent_ips_enabled_flag= Mar 14 05:34:32 crc kubenswrapper[4817]: if [[ "true" == "true" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 14 05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: Mar 14 05:34:32 crc kubenswrapper[4817]: # This is needed so that converting clusters from GA to TP Mar 14 05:34:32 crc kubenswrapper[4817]: # will rollout control plane pods as well Mar 14 05:34:32 crc kubenswrapper[4817]: network_segmentation_enabled_flag= Mar 14 05:34:32 crc kubenswrapper[4817]: multi_network_enabled_flag= Mar 14 05:34:32 crc 
kubenswrapper[4817]: if [[ "true" == "true" ]]; then Mar 14 05:34:32 crc kubenswrapper[4817]: multi_network_enabled_flag="--enable-multi-network" Mar 14 05:34:32 crc kubenswrapper[4817]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 14 05:34:32 crc kubenswrapper[4817]: fi Mar 14 05:34:32 crc kubenswrapper[4817]: Mar 14 05:34:32 crc kubenswrapper[4817]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 14 05:34:32 crc kubenswrapper[4817]: exec /usr/bin/ovnkube \ Mar 14 05:34:32 crc kubenswrapper[4817]: --enable-interconnect \ Mar 14 05:34:32 crc kubenswrapper[4817]: --init-cluster-manager "${K8S_NODE}" \ Mar 14 05:34:32 crc kubenswrapper[4817]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 14 05:34:32 crc kubenswrapper[4817]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 14 05:34:32 crc kubenswrapper[4817]: --metrics-bind-address "127.0.0.1:29108" \ Mar 14 05:34:32 crc kubenswrapper[4817]: --metrics-enable-pprof \ Mar 14 05:34:32 crc kubenswrapper[4817]: --metrics-enable-config-duration \ Mar 14 05:34:32 crc kubenswrapper[4817]: ${ovn_v4_join_subnet_opt} \ Mar 14 05:34:32 crc kubenswrapper[4817]: ${ovn_v6_join_subnet_opt} \ Mar 14 05:34:32 crc kubenswrapper[4817]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 14 05:34:32 crc kubenswrapper[4817]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 14 05:34:32 crc kubenswrapper[4817]: ${dns_name_resolver_enabled_flag} \ Mar 14 05:34:32 crc kubenswrapper[4817]: ${persistent_ips_enabled_flag} \ Mar 14 05:34:32 crc kubenswrapper[4817]: ${multi_network_enabled_flag} \ Mar 14 05:34:32 crc kubenswrapper[4817]: ${network_segmentation_enabled_flag} Mar 14 05:34:32 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcgzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-7gf8w_openshift-ovn-kubernetes(e636294e-01ac-40f2-a057-62894528f233): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:32 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:32 crc kubenswrapper[4817]: E0314 05:34:32.737297 4817 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" podUID="e636294e-01ac-40f2-a057-62894528f233" Mar 14 05:34:33 crc kubenswrapper[4817]: I0314 05:34:33.731329 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:33 crc kubenswrapper[4817]: I0314 05:34:33.732052 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:33 crc kubenswrapper[4817]: I0314 05:34:33.731333 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:33 crc kubenswrapper[4817]: E0314 05:34:33.732307 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:33 crc kubenswrapper[4817]: E0314 05:34:33.732388 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:33 crc kubenswrapper[4817]: E0314 05:34:33.732464 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:34 crc kubenswrapper[4817]: I0314 05:34:34.731337 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:34 crc kubenswrapper[4817]: E0314 05:34:34.731482 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:34 crc kubenswrapper[4817]: E0314 05:34:34.733258 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:34:34 crc kubenswrapper[4817]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 14 05:34:34 crc kubenswrapper[4817]: set -uo pipefail Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 14 05:34:34 crc kubenswrapper[4817]: HOSTS_FILE="/etc/hosts" Mar 14 05:34:34 crc kubenswrapper[4817]: TEMP_FILE="/etc/hosts.tmp" Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: # Make a temporary file with the old hosts file's attributes. Mar 14 05:34:34 crc kubenswrapper[4817]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 14 05:34:34 crc kubenswrapper[4817]: echo "Failed to preserve hosts file. Exiting." Mar 14 05:34:34 crc kubenswrapper[4817]: exit 1 Mar 14 05:34:34 crc kubenswrapper[4817]: fi Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: while true; do Mar 14 05:34:34 crc kubenswrapper[4817]: declare -A svc_ips Mar 14 05:34:34 crc kubenswrapper[4817]: for svc in "${services[@]}"; do Mar 14 05:34:34 crc kubenswrapper[4817]: # Fetch service IP from cluster dns if present. We make several tries Mar 14 05:34:34 crc kubenswrapper[4817]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Mar 14 05:34:34 crc kubenswrapper[4817]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 14 05:34:34 crc kubenswrapper[4817]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 14 05:34:34 crc kubenswrapper[4817]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:34 crc kubenswrapper[4817]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:34 crc kubenswrapper[4817]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 14 05:34:34 crc kubenswrapper[4817]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 14 05:34:34 crc kubenswrapper[4817]: for i in ${!cmds[*]} Mar 14 05:34:34 crc kubenswrapper[4817]: do Mar 14 05:34:34 crc kubenswrapper[4817]: ips=($(eval "${cmds[i]}")) Mar 14 05:34:34 crc kubenswrapper[4817]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 14 05:34:34 crc kubenswrapper[4817]: svc_ips["${svc}"]="${ips[@]}" Mar 14 05:34:34 crc kubenswrapper[4817]: break Mar 14 05:34:34 crc kubenswrapper[4817]: fi Mar 14 05:34:34 crc kubenswrapper[4817]: done Mar 14 05:34:34 crc kubenswrapper[4817]: done Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: # Update /etc/hosts only if we get valid service IPs Mar 14 05:34:34 crc kubenswrapper[4817]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 14 05:34:34 crc kubenswrapper[4817]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 14 05:34:34 crc kubenswrapper[4817]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 14 05:34:34 crc kubenswrapper[4817]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 14 05:34:34 crc kubenswrapper[4817]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 14 05:34:34 crc kubenswrapper[4817]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 14 05:34:34 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:34 crc kubenswrapper[4817]: continue Mar 14 05:34:34 crc kubenswrapper[4817]: fi Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: # Append resolver entries for services Mar 14 05:34:34 crc kubenswrapper[4817]: rc=0 Mar 14 05:34:34 crc kubenswrapper[4817]: for svc in "${!svc_ips[@]}"; do Mar 14 05:34:34 crc kubenswrapper[4817]: for ip in ${svc_ips[${svc}]}; do Mar 14 05:34:34 crc kubenswrapper[4817]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 14 05:34:34 crc kubenswrapper[4817]: done Mar 14 05:34:34 crc kubenswrapper[4817]: done Mar 14 05:34:34 crc kubenswrapper[4817]: if [[ $rc -ne 0 ]]; then Mar 14 05:34:34 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:34 crc kubenswrapper[4817]: continue Mar 14 05:34:34 crc kubenswrapper[4817]: fi Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: Mar 14 05:34:34 crc kubenswrapper[4817]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 14 05:34:34 crc kubenswrapper[4817]: # Replace /etc/hosts with our modified version if needed Mar 14 05:34:34 crc kubenswrapper[4817]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 14 05:34:34 crc kubenswrapper[4817]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 14 05:34:34 crc kubenswrapper[4817]: fi Mar 14 05:34:34 crc kubenswrapper[4817]: sleep 60 & wait Mar 14 05:34:34 crc kubenswrapper[4817]: unset svc_ips Mar 14 05:34:34 crc kubenswrapper[4817]: done Mar 14 05:34:34 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvlks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-v2gnk_openshift-dns(7626bd0c-9420-4e61-98b0-00e4c9eb21f2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 05:34:34 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:34:34 crc kubenswrapper[4817]: E0314 05:34:34.734349 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-v2gnk" 
podUID="7626bd0c-9420-4e61-98b0-00e4c9eb21f2" Mar 14 05:34:34 crc kubenswrapper[4817]: I0314 05:34:34.741142 4817 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.528805 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.529106 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.529061421 +0000 UTC m=+161.567322197 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.529425 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.529696 4817 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.529840 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.529806282 +0000 UTC m=+161.568067058 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.631397 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.631488 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.631529 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.631758 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.631784 4817 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.631804 4817 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.631876 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.631851828 +0000 UTC m=+161.670112604 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.632542 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.632566 4817 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.632582 4817 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.632629 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.63261374 +0000 UTC m=+161.670874516 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.632683 4817 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.632719 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.632707973 +0000 UTC m=+161.670968759 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.713170 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.713248 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.713275 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.713305 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.713330 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:35Z","lastTransitionTime":"2026-03-14T05:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.732253 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.732480 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.732646 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.732664 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.733845 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.736730 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.744831 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.752473 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.752518 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.752535 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.752567 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.752589 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:35Z","lastTransitionTime":"2026-03-14T05:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.753933 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.768588 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029d
c91df933\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.772805 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.772849 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.772861 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.772880 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.772910 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:35Z","lastTransitionTime":"2026-03-14T05:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.786544 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.791349 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.791408 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.791424 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.791700 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.791743 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:35Z","lastTransitionTime":"2026-03-14T05:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.803355 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.807120 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.807168 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.807180 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.807197 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 14 05:34:35 crc kubenswrapper[4817]: I0314 05:34:35.807209 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:35Z","lastTransitionTime":"2026-03-14T05:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.818824 4817 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"63ac787c-19bc-4f4c-91a6-5792ebe52e66\\\",\\\"systemUUID\\\":\\\"7d31aad3-6adc-4cbd-bc39-029dc91df933\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:35 crc kubenswrapper[4817]: E0314 05:34:35.819241 4817 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.285968 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"d2c5d35529dcf41d900165b0f60a064c4eb04161a91bff0758cba75aa847eb66"} Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.286405 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"602f8b7fe8f67305b6072961c01e7731ff09d2e6a4eb84319ec33e6705c0edda"} Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.288771 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f" exitCode=0 Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.288887 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"} Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.309532 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.327623 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55b794c-600b-4525-9e9d-7bad6f3afff2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.343755 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.376268 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.390529 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.407722 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.426951 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.437763 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.453093 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.471384 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.485542 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c5d35529dcf41d900165b0f60a064c4eb04161a91bff0758cba75aa847eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602f8b7fe8f67305b6072961c01e7731ff09d2e6
a4eb84319ec33e6705c0edda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.499581 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.512594 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.524429 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.538613 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.551299 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.563494 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.575962 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55b794c-600b-4525-9e9d-7bad6f3afff2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.586476 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.602522 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.610303 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.618764 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.627306 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.635881 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.646105 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.654792 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.669667 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c5d35529dcf41d900165b0f60a064c4eb04161a91bff0758cba75aa847eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602f8b7fe8f67305b6072961c01e7731ff09d2e6
a4eb84319ec33e6705c0edda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.679274 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.695978 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.708204 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.722261 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.731054 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:36 crc kubenswrapper[4817]: E0314 05:34:36.731249 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.736313 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.746466 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.757571 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.775327 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c5d35529dcf41d900165b0f60a064c4eb04161a91bff0758cba75aa847eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602f8b7fe8f67305b6072961c01e7731ff09d2e6
a4eb84319ec33e6705c0edda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.784565 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.796541 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.803036 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.811451 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.821501 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.830394 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.839323 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55b794c-600b-4525-9e9d-7bad6f3afff2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.852028 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.867356 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.873993 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.881330 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.887651 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.893087 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:36 crc kubenswrapper[4817]: I0314 05:34:36.946790 4817 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:36 crc kubenswrapper[4817]: E0314 05:34:36.946998 4817 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:36 crc kubenswrapper[4817]: E0314 05:34:36.947094 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs podName:aae80926-3fb7-4be8-80a0-25c27ee13a03 nodeName:}" failed. No retries permitted until 2026-03-14 05:34:52.947073653 +0000 UTC m=+146.985334399 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs") pod "network-metrics-daemon-4lfsz" (UID: "aae80926-3fb7-4be8-80a0-25c27ee13a03") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:37 crc kubenswrapper[4817]: E0314 05:34:37.085143 4817 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 05:34:37 crc kubenswrapper[4817]: I0314 05:34:37.294066 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"} Mar 14 05:34:37 crc kubenswrapper[4817]: I0314 05:34:37.294140 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"} Mar 14 05:34:37 crc kubenswrapper[4817]: I0314 05:34:37.294153 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"} Mar 14 05:34:37 crc kubenswrapper[4817]: I0314 05:34:37.294164 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"} Mar 14 05:34:37 crc kubenswrapper[4817]: I0314 05:34:37.731506 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:37 crc kubenswrapper[4817]: E0314 05:34:37.732028 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:37 crc kubenswrapper[4817]: I0314 05:34:37.731635 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:37 crc kubenswrapper[4817]: E0314 05:34:37.732130 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:37 crc kubenswrapper[4817]: I0314 05:34:37.731577 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:37 crc kubenswrapper[4817]: E0314 05:34:37.732199 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.300193 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdf7p" event={"ID":"217c6f57-e799-4243-86ea-5b76c95c95ec","Type":"ContainerStarted","Data":"5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292"} Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.304976 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"} Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.305017 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"} Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.319930 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.334451 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.348398 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c5d35529dcf41d900165b0f60a064c4eb04161a91bff0758cba75aa847eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602f8b7fe8f67305b6072961c01e7731ff09d2e6
a4eb84319ec33e6705c0edda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.360684 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.374420 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.390674 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.402995 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.421548 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.431635 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.442565 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.452569 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55b794c-600b-4525-9e9d-7bad6f3afff2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.467469 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.487577 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.502840 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.512685 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.523047 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:38 crc kubenswrapper[4817]: I0314 05:34:38.731691 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:38 crc kubenswrapper[4817]: E0314 05:34:38.731887 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:39 crc kubenswrapper[4817]: I0314 05:34:39.730880 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:39 crc kubenswrapper[4817]: E0314 05:34:39.731099 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:39 crc kubenswrapper[4817]: I0314 05:34:39.731150 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:39 crc kubenswrapper[4817]: I0314 05:34:39.731195 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:39 crc kubenswrapper[4817]: E0314 05:34:39.731273 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:39 crc kubenswrapper[4817]: E0314 05:34:39.731430 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:40 crc kubenswrapper[4817]: I0314 05:34:40.314221 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"} Mar 14 05:34:40 crc kubenswrapper[4817]: I0314 05:34:40.731732 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:40 crc kubenswrapper[4817]: E0314 05:34:40.731923 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.319450 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-plxwm" event={"ID":"6b55c361-074b-4eec-a066-14d7767cbad2","Type":"ContainerStarted","Data":"21777905d95098759f6d7b623863af0ad199df110fcd5427be40c220776b413d"} Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.331571 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.344402 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.351541 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.360201 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.369025 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.380996 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c5d35529dcf41d900165b0f60a064c4eb04161a91bff0758cba75aa847eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602f8b7fe8f67305b6072961c01e7731ff09d2e6
a4eb84319ec33e6705c0edda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.396261 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.409220 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.417878 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.427983 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.445771 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.460115 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.474146 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55b794c-600b-4525-9e9d-7bad6f3afff2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.483338 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.540897 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.558429 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21777905d95098759f6d7b623863af0ad199df110fcd5427be40c220776b413d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.731693 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:41 crc kubenswrapper[4817]: E0314 05:34:41.732131 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.732146 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:41 crc kubenswrapper[4817]: E0314 05:34:41.732697 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:41 crc kubenswrapper[4817]: I0314 05:34:41.732197 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:41 crc kubenswrapper[4817]: E0314 05:34:41.732867 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:42 crc kubenswrapper[4817]: E0314 05:34:42.086537 4817 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.330033 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerStarted","Data":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"} Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.330080 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.330103 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.330155 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.331806 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"aed6c48b4f74d5dc51dacb990bf15f5bf88633f61d502c9c42a371c61a853655"} Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.341743 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.349025 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.360261 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.371385 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.378981 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55b794c-600b-4525-9e9d-7bad6f3afff2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.396148 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.397171 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.397494 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.415961 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.424958 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21777905d95098759f6d7b623863af0ad199df110fcd5427be40c220776b413d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.441541 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.454741 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.465455 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.482011 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.496116 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c5d35529dcf41d900165b0f60a064c4eb04161a91bff0758cba75aa847eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602f8b7fe8f67305b6072961c01e7731ff09d2e6
a4eb84319ec33e6705c0edda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.508779 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.522629 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.535265 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.549270 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.562231 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.575302 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.590991 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.601167 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55b794c-600b-4525-9e9d-7bad6f3afff2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.612543 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.631372 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.640213 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21777905d95098759f6d7b623863af0ad199df110fcd5427be40c220776b413d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.658433 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.667624 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aae80926-3fb7-4be8-80a0-25c27ee13a03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpj2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4lfsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.679715 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.688424 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.701060 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676c3e1e-370b-4a49-80c6-27422d2d1d56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2c5d35529dcf41d900165b0f60a064c4eb04161a91bff0758cba75aa847eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://602f8b7fe8f67305b6072961c01e7731ff09d2e6
a4eb84319ec33e6705c0edda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtmnv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f8hwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.712195 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e636294e-01ac-40f2-a057-62894528f233\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcgzd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7gf8w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.723951 4817 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed6c48b4f74d5dc51dacb990bf15f5bf88633f61d502c9c42a371c61a853655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.731505 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:42 crc kubenswrapper[4817]: E0314 05:34:42.731628 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:42 crc kubenswrapper[4817]: I0314 05:34:42.736353 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:43 crc kubenswrapper[4817]: I0314 05:34:43.731787 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:43 crc kubenswrapper[4817]: I0314 05:34:43.731821 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:43 crc kubenswrapper[4817]: I0314 05:34:43.731878 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:43 crc kubenswrapper[4817]: E0314 05:34:43.732006 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:43 crc kubenswrapper[4817]: E0314 05:34:43.732133 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:43 crc kubenswrapper[4817]: E0314 05:34:43.732504 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.338177 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c30265b83fe60acb792c718013416419449293b17e262ad344d3675041958382"} Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.338584 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7ec48243d9b3da2010841ea773be7a7ce469eb249d197d3be944231748148952"} Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.340039 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" event={"ID":"e636294e-01ac-40f2-a057-62894528f233","Type":"ContainerStarted","Data":"92458d748ecd4e127528e975090d670cb16e648ddb12565cb4e2eb5c46a45d49"} Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.340067 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" event={"ID":"e636294e-01ac-40f2-a057-62894528f233","Type":"ContainerStarted","Data":"8dfd90eea73aa06b99bda8232406c10613d19c3f99a55efa19af1bd0c6f84233"} Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.353745 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30265b83fe60acb792c718013416419449293b17e262ad344d3675041958382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ec48243d9b3da2010841ea773be7a7ce469eb249d197d3be944231748148952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.363121 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v2gnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7626bd0c-9420-4e61-98b0-00e4c9eb21f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvlks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v2gnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.376485 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdf7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"217c6f57-e799-4243-86ea-5b76c95c95ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvwj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdf7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.393026 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e2523e-6f4b-475e-b733-a45e3744f774\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zgs6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlnmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.407694 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T05:33:18Z\\\"
,\\\"message\\\":\\\"W0314 05:33:18.021724 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 05:33:18.022130 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773466398 cert, and key in /tmp/serving-cert-1618462458/serving-signer.crt, /tmp/serving-cert-1618462458/serving-signer.key\\\\nI0314 05:33:18.472396 1 observer_polling.go:159] Starting file observer\\\\nW0314 05:33:18.484117 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 05:33:18.484332 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 05:33:18.485593 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1618462458/tls.crt::/tmp/serving-cert-1618462458/tls.key\\\\\\\"\\\\nF0314 05:33:18.799768 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T05:33:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.417938 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f55b794c-600b-4525-9e9d-7bad6f3afff2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae4e8f50d209e206f4dcf593060150e436b4f8d3c87ff520977e0fad40b6da1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79a26b216739fbce0b68d2e37cb9b00c3826d98f349afff929d25ce115d9523\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376ca3a655eaf0fbe9a5825704913a54715b63abbcc17429273674c08a114e4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:32:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f42a05444b7c05fc5eb0ceed0ff5f5cf3831f3375c6935b5f98da2a47d2d274\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:32:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:32:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:32:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.426033 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.439633 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T05:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T05:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tntn6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.447076 4817 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-plxwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b55c361-074b-4eec-a066-14d7767cbad2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T05:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21777905d95098759f6d7b623863af0ad199df110fcd5427be40c220776b413d\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T05:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22qrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T05:34:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-plxwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.542636 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podStartSLOduration=74.542613142 podStartE2EDuration="1m14.542613142s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:44.542410266 +0000 UTC m=+138.580671012" watchObservedRunningTime="2026-03-14 05:34:44.542613142 +0000 UTC m=+138.580873888" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.574373 4817 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=38.574346281 podStartE2EDuration="38.574346281s" podCreationTimestamp="2026-03-14 05:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:44.574265469 +0000 UTC m=+138.612526245" watchObservedRunningTime="2026-03-14 05:34:44.574346281 +0000 UTC m=+138.612607027" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.587838 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=9.587811821 podStartE2EDuration="9.587811821s" podCreationTimestamp="2026-03-14 05:34:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:44.587755379 +0000 UTC m=+138.626016125" watchObservedRunningTime="2026-03-14 05:34:44.587811821 +0000 UTC m=+138.626072567" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.631281 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podStartSLOduration=74.63125754000001 podStartE2EDuration="1m14.63125754s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:44.630149667 +0000 UTC m=+138.668410423" watchObservedRunningTime="2026-03-14 05:34:44.63125754 +0000 UTC m=+138.669518306" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.713041 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-plxwm" podStartSLOduration=74.713017308 podStartE2EDuration="1m14.713017308s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:44.699649421 +0000 UTC m=+138.737910167" watchObservedRunningTime="2026-03-14 05:34:44.713017308 +0000 UTC m=+138.751278064" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.728382 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7gf8w" podStartSLOduration=74.728364202 podStartE2EDuration="1m14.728364202s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:44.714443749 +0000 UTC m=+138.752704495" watchObservedRunningTime="2026-03-14 05:34:44.728364202 +0000 UTC m=+138.766624948" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.730989 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:44 crc kubenswrapper[4817]: E0314 05:34:44.731094 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:44 crc kubenswrapper[4817]: I0314 05:34:44.754142 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wdf7p" podStartSLOduration=74.754121828 podStartE2EDuration="1m14.754121828s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:44.75349621 +0000 UTC m=+138.791756956" watchObservedRunningTime="2026-03-14 05:34:44.754121828 +0000 UTC m=+138.792382564" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.344183 4817 generic.go:334] "Generic (PLEG): container finished" podID="44e2523e-6f4b-475e-b733-a45e3744f774" containerID="5e4d25ea1d8b9511eb4be365930a8d5759750f07f6150e6c5b8c04774e2bfe26" exitCode=0 Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.344233 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerDied","Data":"5e4d25ea1d8b9511eb4be365930a8d5759750f07f6150e6c5b8c04774e2bfe26"} Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.589874 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4lfsz"] Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.590009 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:45 crc kubenswrapper[4817]: E0314 05:34:45.590098 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.731369 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.731409 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.731589 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:45 crc kubenswrapper[4817]: E0314 05:34:45.731579 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:45 crc kubenswrapper[4817]: E0314 05:34:45.731695 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:45 crc kubenswrapper[4817]: E0314 05:34:45.731773 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.829296 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.829361 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.829379 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.829406 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.829424 4817 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T05:34:45Z","lastTransitionTime":"2026-03-14T05:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.875716 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz"] Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.876385 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.878489 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.878772 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.879524 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 14 05:34:45 crc kubenswrapper[4817]: I0314 05:34:45.879952 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.027558 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83f354fd-3b4b-497d-9ad9-f32168c7297f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.027637 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83f354fd-3b4b-497d-9ad9-f32168c7297f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: 
\"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.027674 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83f354fd-3b4b-497d-9ad9-f32168c7297f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.027714 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83f354fd-3b4b-497d-9ad9-f32168c7297f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.027761 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83f354fd-3b4b-497d-9ad9-f32168c7297f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.129254 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83f354fd-3b4b-497d-9ad9-f32168c7297f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.129500 4817 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83f354fd-3b4b-497d-9ad9-f32168c7297f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.129742 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83f354fd-3b4b-497d-9ad9-f32168c7297f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.129930 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83f354fd-3b4b-497d-9ad9-f32168c7297f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.130024 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83f354fd-3b4b-497d-9ad9-f32168c7297f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.130159 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83f354fd-3b4b-497d-9ad9-f32168c7297f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.130214 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83f354fd-3b4b-497d-9ad9-f32168c7297f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.131841 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83f354fd-3b4b-497d-9ad9-f32168c7297f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.138061 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83f354fd-3b4b-497d-9ad9-f32168c7297f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.150307 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83f354fd-3b4b-497d-9ad9-f32168c7297f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dnrrz\" (UID: \"83f354fd-3b4b-497d-9ad9-f32168c7297f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.190104 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" Mar 14 05:34:46 crc kubenswrapper[4817]: W0314 05:34:46.203607 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f354fd_3b4b_497d_9ad9_f32168c7297f.slice/crio-3e8727f30984dc8223abade092222b16341b17109506620a8f76a640d84e8254 WatchSource:0}: Error finding container 3e8727f30984dc8223abade092222b16341b17109506620a8f76a640d84e8254: Status 404 returned error can't find the container with id 3e8727f30984dc8223abade092222b16341b17109506620a8f76a640d84e8254 Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.359648 4817 generic.go:334] "Generic (PLEG): container finished" podID="44e2523e-6f4b-475e-b733-a45e3744f774" containerID="7da8f694b645e16974101ac8edecf84351a08706420ee1957556c76f4f6b782a" exitCode=0 Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.359746 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerDied","Data":"7da8f694b645e16974101ac8edecf84351a08706420ee1957556c76f4f6b782a"} Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.361402 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" event={"ID":"83f354fd-3b4b-497d-9ad9-f32168c7297f","Type":"ContainerStarted","Data":"cc6539421cf96d087fdc97f951f5023eacb45f16ba9806bf4eeec7c97f47afb1"} Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.361453 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" event={"ID":"83f354fd-3b4b-497d-9ad9-f32168c7297f","Type":"ContainerStarted","Data":"3e8727f30984dc8223abade092222b16341b17109506620a8f76a640d84e8254"} Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.406447 4817 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dnrrz" podStartSLOduration=76.406421238 podStartE2EDuration="1m16.406421238s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:46.405953795 +0000 UTC m=+140.444214541" watchObservedRunningTime="2026-03-14 05:34:46.406421238 +0000 UTC m=+140.444681984" Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.739934 4817 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 14 05:34:46 crc kubenswrapper[4817]: I0314 05:34:46.749265 4817 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 14 05:34:47 crc kubenswrapper[4817]: E0314 05:34:47.087487 4817 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 05:34:47 crc kubenswrapper[4817]: I0314 05:34:47.368510 4817 generic.go:334] "Generic (PLEG): container finished" podID="44e2523e-6f4b-475e-b733-a45e3744f774" containerID="c14121f1da6f32729acf0a3e3e09d8f91dee90c469b4127c3ee46d09a6664fd0" exitCode=0 Mar 14 05:34:47 crc kubenswrapper[4817]: I0314 05:34:47.368632 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerDied","Data":"c14121f1da6f32729acf0a3e3e09d8f91dee90c469b4127c3ee46d09a6664fd0"} Mar 14 05:34:47 crc kubenswrapper[4817]: I0314 05:34:47.371724 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bdefbf49f64daace7d6f39689e38166b8f63a9c952aaa3a823fc78ae33f466fb"} Mar 14 05:34:47 crc kubenswrapper[4817]: I0314 05:34:47.731305 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:47 crc kubenswrapper[4817]: I0314 05:34:47.731366 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:47 crc kubenswrapper[4817]: I0314 05:34:47.731423 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:47 crc kubenswrapper[4817]: E0314 05:34:47.731769 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:47 crc kubenswrapper[4817]: I0314 05:34:47.731799 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:47 crc kubenswrapper[4817]: E0314 05:34:47.731971 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:47 crc kubenswrapper[4817]: E0314 05:34:47.732026 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:47 crc kubenswrapper[4817]: E0314 05:34:47.732102 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:48 crc kubenswrapper[4817]: I0314 05:34:48.379147 4817 generic.go:334] "Generic (PLEG): container finished" podID="44e2523e-6f4b-475e-b733-a45e3744f774" containerID="46581b426d4bc098285bd7921dde6a9bbd0f985f9a67a6ea07970a337a2c548b" exitCode=0 Mar 14 05:34:48 crc kubenswrapper[4817]: I0314 05:34:48.379212 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerDied","Data":"46581b426d4bc098285bd7921dde6a9bbd0f985f9a67a6ea07970a337a2c548b"} Mar 14 05:34:49 crc kubenswrapper[4817]: I0314 05:34:49.387485 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerStarted","Data":"fd1f8e2997adda5e3961bcf99895ec9b238ead6a9898552c8e3878453fd39532"} Mar 14 05:34:49 crc kubenswrapper[4817]: I0314 05:34:49.731197 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:49 crc kubenswrapper[4817]: E0314 05:34:49.731735 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:49 crc kubenswrapper[4817]: I0314 05:34:49.731197 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:49 crc kubenswrapper[4817]: E0314 05:34:49.732000 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:49 crc kubenswrapper[4817]: I0314 05:34:49.731352 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:49 crc kubenswrapper[4817]: I0314 05:34:49.731275 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:49 crc kubenswrapper[4817]: E0314 05:34:49.732194 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:49 crc kubenswrapper[4817]: E0314 05:34:49.732546 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:50 crc kubenswrapper[4817]: I0314 05:34:50.396486 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-v2gnk" event={"ID":"7626bd0c-9420-4e61-98b0-00e4c9eb21f2","Type":"ContainerStarted","Data":"5338948c872519ada170debed3f7713eff307fe0be963750c786f54778774f54"} Mar 14 05:34:50 crc kubenswrapper[4817]: I0314 05:34:50.406928 4817 generic.go:334] "Generic (PLEG): container finished" podID="44e2523e-6f4b-475e-b733-a45e3744f774" containerID="fd1f8e2997adda5e3961bcf99895ec9b238ead6a9898552c8e3878453fd39532" exitCode=0 Mar 14 05:34:50 crc kubenswrapper[4817]: I0314 05:34:50.406975 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerDied","Data":"fd1f8e2997adda5e3961bcf99895ec9b238ead6a9898552c8e3878453fd39532"} Mar 14 05:34:50 crc kubenswrapper[4817]: I0314 05:34:50.456201 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-v2gnk" podStartSLOduration=80.456177151 podStartE2EDuration="1m20.456177151s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:50.426412649 +0000 UTC m=+144.464673395" watchObservedRunningTime="2026-03-14 05:34:50.456177151 +0000 UTC m=+144.494437887" Mar 14 05:34:50 crc kubenswrapper[4817]: I0314 05:34:50.748448 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 14 05:34:51 crc kubenswrapper[4817]: I0314 05:34:51.416241 4817 generic.go:334] "Generic (PLEG): container finished" podID="44e2523e-6f4b-475e-b733-a45e3744f774" 
containerID="999afa3ddcc3b1d1bc4718d3e9afd5ed33c1e952b5dc29d0a3fd2b33883eaa88" exitCode=0 Mar 14 05:34:51 crc kubenswrapper[4817]: I0314 05:34:51.416372 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerDied","Data":"999afa3ddcc3b1d1bc4718d3e9afd5ed33c1e952b5dc29d0a3fd2b33883eaa88"} Mar 14 05:34:51 crc kubenswrapper[4817]: I0314 05:34:51.468532 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.468497263 podStartE2EDuration="1.468497263s" podCreationTimestamp="2026-03-14 05:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:51.467985558 +0000 UTC m=+145.506246294" watchObservedRunningTime="2026-03-14 05:34:51.468497263 +0000 UTC m=+145.506758049" Mar 14 05:34:51 crc kubenswrapper[4817]: I0314 05:34:51.731116 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 05:34:51 crc kubenswrapper[4817]: E0314 05:34:51.731255 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 05:34:51 crc kubenswrapper[4817]: I0314 05:34:51.731430 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 05:34:51 crc kubenswrapper[4817]: E0314 05:34:51.731499 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 05:34:51 crc kubenswrapper[4817]: I0314 05:34:51.731594 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:34:51 crc kubenswrapper[4817]: E0314 05:34:51.731665 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 05:34:51 crc kubenswrapper[4817]: I0314 05:34:51.731758 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:51 crc kubenswrapper[4817]: E0314 05:34:51.731827 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4lfsz" podUID="aae80926-3fb7-4be8-80a0-25c27ee13a03" Mar 14 05:34:52 crc kubenswrapper[4817]: I0314 05:34:52.424465 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" event={"ID":"44e2523e-6f4b-475e-b733-a45e3744f774","Type":"ContainerStarted","Data":"c9cf9dfab42f643324c8f827ae412904172fec781e8f6c2eccf5301289a21b89"} Mar 14 05:34:52 crc kubenswrapper[4817]: I0314 05:34:52.466050 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jlnmq" podStartSLOduration=82.466022637 podStartE2EDuration="1m22.466022637s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:34:52.464167343 +0000 UTC m=+146.502428169" watchObservedRunningTime="2026-03-14 05:34:52.466022637 +0000 UTC m=+146.504283423" Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.041179 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz" Mar 14 05:34:53 crc kubenswrapper[4817]: E0314 05:34:53.041359 4817 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 05:34:53 crc kubenswrapper[4817]: E0314 05:34:53.041452 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs podName:aae80926-3fb7-4be8-80a0-25c27ee13a03 nodeName:}" failed. No retries permitted until 2026-03-14 05:35:25.041429454 +0000 UTC m=+179.079690200 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs") pod "network-metrics-daemon-4lfsz" (UID: "aae80926-3fb7-4be8-80a0-25c27ee13a03") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.731672 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.731724 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.731733 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.731986 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.735149 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.735564 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.735619 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.735773 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.736032 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 14 05:34:53 crc kubenswrapper[4817]: I0314 05:34:53.737059 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.073834 4817 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.124111 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86l7w"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.124972 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.126466 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7zj7n"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.129140 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.134261 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.134787 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.135160 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.135187 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.135567 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.137049 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.140673 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.140762 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.141212 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.140689 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.141779 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.143633 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.147172 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.166130 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.166716 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.167287 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.167962 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.170157 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-f772q"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.170538 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5mbx5"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.170879 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.171470 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rt8qf"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.172016 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.181526 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.184671 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gcsxl"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.185106 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.185470 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jnxpm"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.185807 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jnxpm"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.191375 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gcsxl"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.192115 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.194010 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.194275 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.194734 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.195063 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.195398 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.195614 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.195846 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.196095 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.196306 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.196848 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.197145 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.197601 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jx7hb"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.197918 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8hzr5"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.198456 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.200494 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.204632 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.205232 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z5ltz"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.205538 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wbttc"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.206028 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.207604 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jx7hb"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.208383 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.208809 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.209695 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.209828 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.209860 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.210092 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.210207 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.210364 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.210836 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.211940 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557774-2vhw8"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.212013 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.212355 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557774-2vhw8"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.212584 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.212733 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.212750 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.212854 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.212936 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.213198 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.213277 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.213535 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.213576 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.213735 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.213759 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.213863 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.213993 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.214017 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.214072 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.214116 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.214313 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.232503 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.232861 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.233189 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.233217 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.233370 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.233447 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.233592 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.233677 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.233805 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.233940 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.234071 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.234174 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.234191 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.234390 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.234768 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.234852 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.235033 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.235031 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.235317 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.234774 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.235878 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.240048 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.240291 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lcsh2"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.240587 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.240696 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.240874 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.241083 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.241186 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.240816 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.241409 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.241468 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.241604 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.241650 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.242162 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.244869 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.245704 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.245990 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.246137 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.246246 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.245258 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.246313 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.246802 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4khbc"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.246958 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.247085 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.247221 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.247508 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.247661 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4khbc"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.247826 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.248024 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.248293 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.248488 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.245305 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.245493 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.250333 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.252616 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.253267 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.253487 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.253633 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.253766 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.254145 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.254369 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.255840 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.256106 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.257580 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.264566 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.265190 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.266476 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.293287 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-76kfl"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.293788 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294059 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp"]
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294351 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-serving-cert\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294391 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c53feda-784d-431d-bbd6-528333b58935-etcd-client\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294414 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-policies\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294442 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294466 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-etcd-ca\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294488 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0eae806d-94d0-4c92-a008-3853f85933a9-machine-approver-tls\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294509 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294712 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-76kfl"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294858 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.294509 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295012 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-trusted-ca-bundle\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295030 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295051 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf21823d-0caa-409b-8cf7-47de479e404d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295068 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hqnb\" (UniqueName: \"kubernetes.io/projected/61083608-8b3b-4c98-a236-0d3f1d26d3b5-kube-api-access-6hqnb\") pod \"dns-operator-744455d44c-8hzr5\" (UID: \"61083608-8b3b-4c98-a236-0d3f1d26d3b5\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295084 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-service-ca-bundle\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295099 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-config\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295128 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-images\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295146 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295168 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bda382-2788-43b8-b149-9e8319aaa2c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295190 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295206 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmb6t\" (UniqueName: \"kubernetes.io/projected/bf21823d-0caa-409b-8cf7-47de479e404d-kube-api-access-tmb6t\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7"
Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295224 4817 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-trusted-ca\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295239 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvq4\" (UniqueName: \"kubernetes.io/projected/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-kube-api-access-6pvq4\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295255 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-config\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295271 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61083608-8b3b-4c98-a236-0d3f1d26d3b5-metrics-tls\") pod \"dns-operator-744455d44c-8hzr5\" (UID: \"61083608-8b3b-4c98-a236-0d3f1d26d3b5\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295289 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntwbl\" (UniqueName: \"kubernetes.io/projected/9c53feda-784d-431d-bbd6-528333b58935-kube-api-access-ntwbl\") pod \"etcd-operator-b45778765-z5ltz\" (UID: 
\"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295316 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlx87\" (UniqueName: \"kubernetes.io/projected/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-kube-api-access-zlx87\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295332 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-config\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295355 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llp9h\" (UniqueName: \"kubernetes.io/projected/4b50e6b0-0310-4c00-9fe4-b7d987811711-kube-api-access-llp9h\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295371 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b916de-657d-4285-80a7-e22fe89dd6f8-serving-cert\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295386 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-audit-dir\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295430 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-config\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295446 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-client-ca\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295460 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295483 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rt8qf\" (UID: 
\"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295501 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xd68\" (UniqueName: \"kubernetes.io/projected/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-kube-api-access-5xd68\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295517 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ttj7\" (UniqueName: \"kubernetes.io/projected/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-kube-api-access-7ttj7\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295532 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p89p8\" (UniqueName: \"kubernetes.io/projected/2e334958-5906-4f73-b6ef-c4634ae491b0-kube-api-access-p89p8\") pod \"cluster-samples-operator-665b6dd947-v5nhk\" (UID: \"2e334958-5906-4f73-b6ef-c4634ae491b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295547 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-encryption-config\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295560 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295582 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf21823d-0caa-409b-8cf7-47de479e404d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295599 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-dir\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295620 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e334958-5906-4f73-b6ef-c4634ae491b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-v5nhk\" (UID: \"2e334958-5906-4f73-b6ef-c4634ae491b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295645 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b50e6b0-0310-4c00-9fe4-b7d987811711-audit-dir\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: 
\"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295661 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-etcd-client\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295683 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtvsd\" (UniqueName: \"kubernetes.io/projected/3d625077-3908-4a2f-a87e-5631a1a2a450-kube-api-access-mtvsd\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295706 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-oauth-serving-cert\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295727 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c53feda-784d-431d-bbd6-528333b58935-serving-cert\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295742 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-node-pullsecrets\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295757 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcsn6\" (UniqueName: \"kubernetes.io/projected/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-kube-api-access-bcsn6\") pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295782 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-config\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295802 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-audit\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295818 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-encryption-config\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295837 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-serving-cert\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295852 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-etcd-service-ca\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295866 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-etcd-serving-ca\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.295881 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.298076 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.299765 4817 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.301389 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.303470 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.303769 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.306829 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.306975 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.307615 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.307781 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5cqlk"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309044 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-serving-cert\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309191 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0eae806d-94d0-4c92-a008-3853f85933a9-config\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309225 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309253 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-client-ca\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309295 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjql\" (UniqueName: \"kubernetes.io/projected/ac384b90-5e6b-4477-b71a-8a8a56a29896-kube-api-access-lkjql\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309326 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-serving-cert\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " 
pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309352 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-config\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309375 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309396 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-etcd-client\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309417 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309439 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7d2d\" (UniqueName: \"kubernetes.io/projected/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-kube-api-access-n7d2d\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309467 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d625077-3908-4a2f-a87e-5631a1a2a450-serving-cert\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309501 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bda382-2788-43b8-b149-9e8319aaa2c9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309533 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-service-ca\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309558 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0eae806d-94d0-4c92-a008-3853f85933a9-auth-proxy-config\") pod \"machine-approver-56656f9798-5rmwm\" (UID: 
\"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309583 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-config\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309612 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309643 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309676 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-serving-cert\") pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309709 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-oauth-config\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309743 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309769 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x747\" (UniqueName: \"kubernetes.io/projected/3b721585-56d3-4382-b93d-c70296e6d223-kube-api-access-7x747\") pod \"downloads-7954f5f757-gcsxl\" (UID: \"3b721585-56d3-4382-b93d-c70296e6d223\") " pod="openshift-console/downloads-7954f5f757-gcsxl" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309792 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-serving-cert\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309817 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-config\") pod \"console-f9d7485db-jnxpm\" (UID: 
\"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309839 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309861 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkqn\" (UniqueName: \"kubernetes.io/projected/c6b916de-657d-4285-80a7-e22fe89dd6f8-kube-api-access-4wkqn\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309883 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztk8n\" (UniqueName: \"kubernetes.io/projected/83bda382-2788-43b8-b149-9e8319aaa2c9-kube-api-access-ztk8n\") pod \"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309935 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-audit-policies\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 
05:34:56.309955 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.309980 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-image-import-ca\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.310004 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg76s\" (UniqueName: \"kubernetes.io/projected/0eae806d-94d0-4c92-a008-3853f85933a9-kube-api-access-bg76s\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.310028 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.310596 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.310782 4817 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.311268 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.311376 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.315121 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.315366 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.316310 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.316754 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.318530 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.319148 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.319734 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.320233 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.320695 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.322610 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.322877 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.323396 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.325061 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.326210 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.326788 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bq8dk"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.326907 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.326988 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.327260 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.327887 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86l7w"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.327960 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.328426 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7zj7n"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.329841 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-twv27"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.330611 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-twv27" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.330931 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pk9ws"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.331789 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.331878 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9bjwk"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.332714 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.333125 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.334357 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gcsxl"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.334984 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jnxpm"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.336550 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.337408 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jx7hb"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.340084 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-f772q"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.341947 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.347166 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.347355 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.357490 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.362154 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.363864 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4khbc"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.365461 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.367149 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557774-2vhw8"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.367484 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.367854 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.369609 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wbttc"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.370918 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.374569 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8hzr5"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.376766 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 
05:34:56.378447 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.380524 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z5ltz"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.382324 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rt8qf"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.383788 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zc2cb"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.385126 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6mx7t"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.385288 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.386544 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.387068 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.387498 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.388326 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.389630 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.390680 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5cqlk"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.391973 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lcsh2"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.394306 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5mbx5"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.397665 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.399940 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.401975 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.404499 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bq8dk"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.407053 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.407251 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.409306 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zc2cb"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.410674 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411240 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-audit-policies\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411270 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkqn\" (UniqueName: \"kubernetes.io/projected/c6b916de-657d-4285-80a7-e22fe89dd6f8-kube-api-access-4wkqn\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411384 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ztk8n\" (UniqueName: \"kubernetes.io/projected/83bda382-2788-43b8-b149-9e8319aaa2c9-kube-api-access-ztk8n\") pod \"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411429 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79ef409d-9eab-4e6a-8009-4f06ca780969-metrics-tls\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411470 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411488 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411505 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-image-import-ca\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411521 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg76s\" (UniqueName: \"kubernetes.io/projected/0eae806d-94d0-4c92-a008-3853f85933a9-kube-api-access-bg76s\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411540 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411556 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-serving-cert\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411570 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c53feda-784d-431d-bbd6-528333b58935-etcd-client\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411641 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-policies\") pod 
\"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411675 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411691 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-etcd-ca\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411706 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0eae806d-94d0-4c92-a008-3853f85933a9-machine-approver-tls\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411726 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-trusted-ca-bundle\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411747 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411789 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf21823d-0caa-409b-8cf7-47de479e404d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411808 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hqnb\" (UniqueName: \"kubernetes.io/projected/61083608-8b3b-4c98-a236-0d3f1d26d3b5-kube-api-access-6hqnb\") pod \"dns-operator-744455d44c-8hzr5\" (UID: \"61083608-8b3b-4c98-a236-0d3f1d26d3b5\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411834 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-service-ca-bundle\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411850 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-config\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411869 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-srv-cert\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411887 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-images\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411922 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411940 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bda382-2788-43b8-b149-9e8319aaa2c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411959 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411976 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmb6t\" (UniqueName: \"kubernetes.io/projected/bf21823d-0caa-409b-8cf7-47de479e404d-kube-api-access-tmb6t\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.411995 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-config\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412012 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-trusted-ca\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412029 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvq4\" (UniqueName: \"kubernetes.io/projected/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-kube-api-access-6pvq4\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412045 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/61083608-8b3b-4c98-a236-0d3f1d26d3b5-metrics-tls\") pod \"dns-operator-744455d44c-8hzr5\" (UID: \"61083608-8b3b-4c98-a236-0d3f1d26d3b5\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412063 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-config\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412087 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntwbl\" (UniqueName: \"kubernetes.io/projected/9c53feda-784d-431d-bbd6-528333b58935-kube-api-access-ntwbl\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412104 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlx87\" (UniqueName: \"kubernetes.io/projected/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-kube-api-access-zlx87\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412120 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjts4\" (UniqueName: \"kubernetes.io/projected/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-kube-api-access-cjts4\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412145 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llp9h\" (UniqueName: \"kubernetes.io/projected/4b50e6b0-0310-4c00-9fe4-b7d987811711-kube-api-access-llp9h\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412161 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b916de-657d-4285-80a7-e22fe89dd6f8-serving-cert\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412178 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-audit-dir\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412194 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e1f15d1-2628-4fa2-b571-b39051f128d2-apiservice-cert\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412212 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-client-ca\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412234 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412252 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-config\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412267 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412290 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412307 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p89p8\" (UniqueName: 
\"kubernetes.io/projected/2e334958-5906-4f73-b6ef-c4634ae491b0-kube-api-access-p89p8\") pod \"cluster-samples-operator-665b6dd947-v5nhk\" (UID: \"2e334958-5906-4f73-b6ef-c4634ae491b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412324 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xd68\" (UniqueName: \"kubernetes.io/projected/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-kube-api-access-5xd68\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412340 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ttj7\" (UniqueName: \"kubernetes.io/projected/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-kube-api-access-7ttj7\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412356 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-encryption-config\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412371 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 
05:34:56.412386 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79ef409d-9eab-4e6a-8009-4f06ca780969-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412402 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-dir\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412418 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e334958-5906-4f73-b6ef-c4634ae491b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-v5nhk\" (UID: \"2e334958-5906-4f73-b6ef-c4634ae491b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412502 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf21823d-0caa-409b-8cf7-47de479e404d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412530 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e1f15d1-2628-4fa2-b571-b39051f128d2-webhook-cert\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: 
\"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412548 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b50e6b0-0310-4c00-9fe4-b7d987811711-audit-dir\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412563 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-etcd-client\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412583 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtvsd\" (UniqueName: \"kubernetes.io/projected/3d625077-3908-4a2f-a87e-5631a1a2a450-kube-api-access-mtvsd\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412598 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-node-pullsecrets\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412614 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcsn6\" (UniqueName: \"kubernetes.io/projected/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-kube-api-access-bcsn6\") 
pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412630 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-oauth-serving-cert\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412646 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c53feda-784d-431d-bbd6-528333b58935-serving-cert\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412663 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-audit\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412678 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-encryption-config\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412694 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-config\") pod 
\"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412709 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-etcd-serving-ca\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412725 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412741 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-serving-cert\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412755 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-etcd-service-ca\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412770 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/2e1f15d1-2628-4fa2-b571-b39051f128d2-tmpfs\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412788 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-serving-cert\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412794 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-image-import-ca\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412804 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkjql\" (UniqueName: \"kubernetes.io/projected/ac384b90-5e6b-4477-b71a-8a8a56a29896-kube-api-access-lkjql\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412864 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eae806d-94d0-4c92-a008-3853f85933a9-config\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412906 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412930 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-client-ca\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412955 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msxmz\" (UniqueName: \"kubernetes.io/projected/2e1f15d1-2628-4fa2-b571-b39051f128d2-kube-api-access-msxmz\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412978 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.412995 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-serving-cert\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " 
pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413013 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-config\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413031 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79ef409d-9eab-4e6a-8009-4f06ca780969-trusted-ca\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413054 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-etcd-client\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413071 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413101 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7d2d\" (UniqueName: 
\"kubernetes.io/projected/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-kube-api-access-n7d2d\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413118 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d625077-3908-4a2f-a87e-5631a1a2a450-serving-cert\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413134 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bda382-2788-43b8-b149-9e8319aaa2c9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413150 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413180 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hns\" (UniqueName: \"kubernetes.io/projected/79ef409d-9eab-4e6a-8009-4f06ca780969-kube-api-access-98hns\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 
05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413206 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-service-ca\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413224 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0eae806d-94d0-4c92-a008-3853f85933a9-auth-proxy-config\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413240 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-config\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413259 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413276 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-serving-cert\") pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413295 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-oauth-config\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413309 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-config\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413338 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413354 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413370 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x747\" (UniqueName: 
\"kubernetes.io/projected/3b721585-56d3-4382-b93d-c70296e6d223-kube-api-access-7x747\") pod \"downloads-7954f5f757-gcsxl\" (UID: \"3b721585-56d3-4382-b93d-c70296e6d223\") " pod="openshift-console/downloads-7954f5f757-gcsxl" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413387 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-serving-cert\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413396 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.413473 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-config\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.414473 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-trusted-ca\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.414813 4817 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-dir\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.415574 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-config\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.415588 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-policies\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.416348 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-images\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.416422 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eae806d-94d0-4c92-a008-3853f85933a9-config\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.416812 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-service-ca-bundle\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.416894 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.417206 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-config\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.417807 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.418659 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 
05:34:56.418721 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bda382-2788-43b8-b149-9e8319aaa2c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.418744 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-client-ca\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.420039 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.420306 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-audit-dir\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.420887 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-config\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.420924 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d625077-3908-4a2f-a87e-5631a1a2a450-serving-cert\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.421055 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-client-ca\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.421248 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-serving-cert\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.421290 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf21823d-0caa-409b-8cf7-47de479e404d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.421546 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.423223 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-config\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.423223 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.423487 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-trusted-ca-bundle\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.423842 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-node-pullsecrets\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.424178 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b916de-657d-4285-80a7-e22fe89dd6f8-serving-cert\") pod 
\"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.425036 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.425157 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.425591 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-oauth-serving-cert\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.425671 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.425929 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-config\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.426049 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.426133 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b50e6b0-0310-4c00-9fe4-b7d987811711-audit-dir\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.426701 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-config\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.427280 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-service-ca\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.427325 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0eae806d-94d0-4c92-a008-3853f85933a9-auth-proxy-config\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.427798 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.428235 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9c53feda-784d-431d-bbd6-528333b58935-etcd-client\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.428618 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.428706 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-config\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.428927 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b916de-657d-4285-80a7-e22fe89dd6f8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-f772q\" (UID: 
\"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.428564 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0eae806d-94d0-4c92-a008-3853f85933a9-machine-approver-tls\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.429184 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b50e6b0-0310-4c00-9fe4-b7d987811711-audit-policies\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.429348 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-audit\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.429462 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-etcd-serving-ca\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.430049 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bda382-2788-43b8-b149-9e8319aaa2c9-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.430164 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.430608 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.430633 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-encryption-config\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.431206 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf21823d-0caa-409b-8cf7-47de479e404d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.431527 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.431654 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-serving-cert\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.431748 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.432762 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-serving-cert\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.433070 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-oauth-config\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:56 crc 
kubenswrapper[4817]: I0314 05:34:56.433086 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-serving-cert\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.433280 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c53feda-784d-431d-bbd6-528333b58935-serving-cert\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.433415 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.433629 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61083608-8b3b-4c98-a236-0d3f1d26d3b5-metrics-tls\") pod \"dns-operator-744455d44c-8hzr5\" (UID: \"61083608-8b3b-4c98-a236-0d3f1d26d3b5\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.433750 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-etcd-client\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 
05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.434103 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.434372 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e334958-5906-4f73-b6ef-c4634ae491b0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-v5nhk\" (UID: \"2e334958-5906-4f73-b6ef-c4634ae491b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.434417 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-serving-cert\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.435284 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-serving-cert\") pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.435904 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b50e6b0-0310-4c00-9fe4-b7d987811711-etcd-client\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.439736 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-twv27"] Mar 14 05:34:56 
crc kubenswrapper[4817]: I0314 05:34:56.440630 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6mx7t"] Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.440703 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-encryption-config\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.449459 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.457069 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-etcd-service-ca\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.467748 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.469206 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9c53feda-784d-431d-bbd6-528333b58935-etcd-ca\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.488243 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.507110 4817 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514022 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79ef409d-9eab-4e6a-8009-4f06ca780969-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514076 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e1f15d1-2628-4fa2-b571-b39051f128d2-webhook-cert\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514114 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2e1f15d1-2628-4fa2-b571-b39051f128d2-tmpfs\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514138 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msxmz\" (UniqueName: \"kubernetes.io/projected/2e1f15d1-2628-4fa2-b571-b39051f128d2-kube-api-access-msxmz\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514155 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79ef409d-9eab-4e6a-8009-4f06ca780969-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514182 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98hns\" (UniqueName: \"kubernetes.io/projected/79ef409d-9eab-4e6a-8009-4f06ca780969-kube-api-access-98hns\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514215 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79ef409d-9eab-4e6a-8009-4f06ca780969-metrics-tls\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514253 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-srv-cert\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514308 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e1f15d1-2628-4fa2-b571-b39051f128d2-apiservice-cert\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514322 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjts4\" (UniqueName: 
\"kubernetes.io/projected/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-kube-api-access-cjts4\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514337 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.514661 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2e1f15d1-2628-4fa2-b571-b39051f128d2-tmpfs\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.516880 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.530192 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.546719 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.567401 4817 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.588159 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.607291 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.633525 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.646510 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.667778 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.686976 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.698118 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79ef409d-9eab-4e6a-8009-4f06ca780969-metrics-tls\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.713064 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.716282 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79ef409d-9eab-4e6a-8009-4f06ca780969-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.727666 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.747851 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.787793 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.808107 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.818134 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e1f15d1-2628-4fa2-b571-b39051f128d2-webhook-cert\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.818486 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e1f15d1-2628-4fa2-b571-b39051f128d2-apiservice-cert\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.827685 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.848343 4817 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.867558 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.887582 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.907077 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.928466 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.950709 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.967632 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 14 05:34:56 crc kubenswrapper[4817]: I0314 05:34:56.986938 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.006987 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.027675 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.047578 4817 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.067806 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.087059 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.107149 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.119997 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-srv-cert\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.127778 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.148355 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.168061 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.187148 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.208452 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 
05:34:57.228203 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.247734 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.294975 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.307464 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.326178 4817 request.go:700] Waited for 1.014559247s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.328024 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.347054 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.367464 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.387097 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.407862 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: 
I0314 05:34:57.434042 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.447071 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.467019 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.487085 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.506631 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.527613 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.546502 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.566502 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.587608 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.606793 4817 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.627100 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.647290 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.667393 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.688407 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.708251 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.727600 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.747938 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.768725 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.788314 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.808043 4817 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.827460 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.848345 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.868681 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.886857 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.907044 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.927449 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.946785 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.967634 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 14 05:34:57 crc kubenswrapper[4817]: I0314 05:34:57.986239 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.007655 4817 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.027379 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.047324 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.068212 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.087768 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.107464 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.127313 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.147367 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.168099 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.187997 4817 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.207587 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.227614 4817 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.275401 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkqn\" (UniqueName: \"kubernetes.io/projected/c6b916de-657d-4285-80a7-e22fe89dd6f8-kube-api-access-4wkqn\") pod \"authentication-operator-69f744f599-f772q\" (UID: \"c6b916de-657d-4285-80a7-e22fe89dd6f8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.300586 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztk8n\" (UniqueName: \"kubernetes.io/projected/83bda382-2788-43b8-b149-9e8319aaa2c9-kube-api-access-ztk8n\") pod \"openshift-controller-manager-operator-756b6f6bc6-4r6qx\" (UID: \"83bda382-2788-43b8-b149-9e8319aaa2c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.311000 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmb6t\" (UniqueName: \"kubernetes.io/projected/bf21823d-0caa-409b-8cf7-47de479e404d-kube-api-access-tmb6t\") pod \"openshift-apiserver-operator-796bbdcf4f-r82p7\" (UID: \"bf21823d-0caa-409b-8cf7-47de479e404d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.327603 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg76s\" (UniqueName: \"kubernetes.io/projected/0eae806d-94d0-4c92-a008-3853f85933a9-kube-api-access-bg76s\") pod \"machine-approver-56656f9798-5rmwm\" (UID: \"0eae806d-94d0-4c92-a008-3853f85933a9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.345402 4817 request.go:700] Waited for 1.930838057s due to client-side 
throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.346325 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkjql\" (UniqueName: \"kubernetes.io/projected/ac384b90-5e6b-4477-b71a-8a8a56a29896-kube-api-access-lkjql\") pod \"console-f9d7485db-jnxpm\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.364351 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pvq4\" (UniqueName: \"kubernetes.io/projected/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-kube-api-access-6pvq4\") pod \"route-controller-manager-6576b87f9c-jsk5j\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.380864 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hqnb\" (UniqueName: \"kubernetes.io/projected/61083608-8b3b-4c98-a236-0d3f1d26d3b5-kube-api-access-6hqnb\") pod \"dns-operator-744455d44c-8hzr5\" (UID: \"61083608-8b3b-4c98-a236-0d3f1d26d3b5\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.402477 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntwbl\" (UniqueName: \"kubernetes.io/projected/9c53feda-784d-431d-bbd6-528333b58935-kube-api-access-ntwbl\") pod \"etcd-operator-b45778765-z5ltz\" (UID: \"9c53feda-784d-431d-bbd6-528333b58935\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.408357 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.421841 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.433137 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlx87\" (UniqueName: \"kubernetes.io/projected/9f2c60cc-433e-4f50-8eca-2fb0ddd7982d-kube-api-access-zlx87\") pod \"machine-api-operator-5694c8668f-7zj7n\" (UID: \"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.445685 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.451320 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llp9h\" (UniqueName: \"kubernetes.io/projected/4b50e6b0-0310-4c00-9fe4-b7d987811711-kube-api-access-llp9h\") pod \"apiserver-7bbb656c7d-rpr5b\" (UID: \"4b50e6b0-0310-4c00-9fe4-b7d987811711\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.459575 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.465478 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7d2d\" (UniqueName: \"kubernetes.io/projected/4fc456eb-fbb1-42c7-88fa-f7a3e13a397a-kube-api-access-n7d2d\") pod \"apiserver-76f77b778f-rt8qf\" (UID: \"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a\") " pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.476769 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.484842 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p89p8\" (UniqueName: \"kubernetes.io/projected/2e334958-5906-4f73-b6ef-c4634ae491b0-kube-api-access-p89p8\") pod \"cluster-samples-operator-665b6dd947-v5nhk\" (UID: \"2e334958-5906-4f73-b6ef-c4634ae491b0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.484957 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.490646 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.504923 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.519870 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xd68\" (UniqueName: \"kubernetes.io/projected/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-kube-api-access-5xd68\") pod \"oauth-openshift-558db77b4-5mbx5\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.524817 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x747\" (UniqueName: \"kubernetes.io/projected/3b721585-56d3-4382-b93d-c70296e6d223-kube-api-access-7x747\") pod \"downloads-7954f5f757-gcsxl\" (UID: \"3b721585-56d3-4382-b93d-c70296e6d223\") " pod="openshift-console/downloads-7954f5f757-gcsxl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.548493 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ttj7\" (UniqueName: \"kubernetes.io/projected/ee782dd8-7162-4e94-a2c6-5af0c4596ecf-kube-api-access-7ttj7\") pod \"console-operator-58897d9998-jx7hb\" (UID: \"ee782dd8-7162-4e94-a2c6-5af0c4596ecf\") " pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.566999 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtvsd\" (UniqueName: \"kubernetes.io/projected/3d625077-3908-4a2f-a87e-5631a1a2a450-kube-api-access-mtvsd\") pod \"controller-manager-879f6c89f-86l7w\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") " pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.579230 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.583141 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcsn6\" (UniqueName: \"kubernetes.io/projected/4da22dbe-ae9a-4df9-a16c-76b248ef22b4-kube-api-access-bcsn6\") pod \"openshift-config-operator-7777fb866f-wbttc\" (UID: \"4da22dbe-ae9a-4df9-a16c-76b248ef22b4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.599131 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.609145 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.610190 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msxmz\" (UniqueName: \"kubernetes.io/projected/2e1f15d1-2628-4fa2-b571-b39051f128d2-kube-api-access-msxmz\") pod \"packageserver-d55dfcdfc-lg8lf\" (UID: \"2e1f15d1-2628-4fa2-b571-b39051f128d2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.613746 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.629586 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79ef409d-9eab-4e6a-8009-4f06ca780969-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.640582 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.646447 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx"] Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.648910 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.650943 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98hns\" (UniqueName: \"kubernetes.io/projected/79ef409d-9eab-4e6a-8009-4f06ca780969-kube-api-access-98hns\") pod \"ingress-operator-5b745b69d9-xnhq6\" (UID: \"79ef409d-9eab-4e6a-8009-4f06ca780969\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.673371 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"] Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.673528 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjts4\" (UniqueName: \"kubernetes.io/projected/a7458d73-4b83-46dc-9be4-6c14e1b9bcd7-kube-api-access-cjts4\") pod \"olm-operator-6b444d44fb-f6b68\" (UID: \"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:58 crc kubenswrapper[4817]: W0314 05:34:58.684244 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83bda382_2788_43b8_b149_9e8319aaa2c9.slice/crio-0bede0942f85a71db4682d08d2aad596798ac635bfb96d1db0c4433be694be1b WatchSource:0}: Error finding container 0bede0942f85a71db4682d08d2aad596798ac635bfb96d1db0c4433be694be1b: Status 404 returned error can't find the container with id 0bede0942f85a71db4682d08d2aad596798ac635bfb96d1db0c4433be694be1b Mar 14 05:34:58 crc kubenswrapper[4817]: W0314 05:34:58.695044 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a068b8_6cf5_4f60_9a4b_89c40b37ad32.slice/crio-72bd7e80f1abd14e784322313489678227d4087b615c81c824418703e4aa3f59 
WatchSource:0}: Error finding container 72bd7e80f1abd14e784322313489678227d4087b615c81c824418703e4aa3f59: Status 404 returned error can't find the container with id 72bd7e80f1abd14e784322313489678227d4087b615c81c824418703e4aa3f59 Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.699209 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.740569 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749667 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg7pz\" (UniqueName: \"kubernetes.io/projected/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-kube-api-access-vg7pz\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749705 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfs6j\" (UniqueName: \"kubernetes.io/projected/a3740912-6b07-47f2-80f2-7f62e67cbef2-kube-api-access-bfs6j\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749730 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3740912-6b07-47f2-80f2-7f62e67cbef2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749759 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-tls\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749817 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7p56\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-kube-api-access-b7p56\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749835 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/536efb61-e25f-4be4-88b8-e2a7f7e0df84-proxy-tls\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749862 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-stats-auth\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749877 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbd22\" 
(UniqueName: \"kubernetes.io/projected/5624b850-5ca9-47d2-82e9-52bbc3829bc5-kube-api-access-xbd22\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749909 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7767bcc0-c4e6-4e9e-ba1f-5286b7263f44-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k2dxk\" (UID: \"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749925 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749951 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-config\") pod \"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.749983 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cjhd\" (UniqueName: \"kubernetes.io/projected/cdb37610-64a2-43d9-98b1-513a60b6de4d-kube-api-access-8cjhd\") pod \"collect-profiles-29557770-ng6bz\" (UID: 
\"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750038 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-metrics-certs\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750065 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-default-certificate\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750090 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb37610-64a2-43d9-98b1-513a60b6de4d-config-volume\") pod \"collect-profiles-29557770-ng6bz\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750131 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1b60a8f-12a6-4129-9b96-2b69e788111b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750146 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-certificates\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750166 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-service-ca-bundle\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750184 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-bound-sa-token\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750200 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/536efb61-e25f-4be4-88b8-e2a7f7e0df84-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750216 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3740912-6b07-47f2-80f2-7f62e67cbef2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750237 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1555cef9-f674-40c5-8edf-e0a02cda8d4b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750254 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1555cef9-f674-40c5-8edf-e0a02cda8d4b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750275 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb37610-64a2-43d9-98b1-513a60b6de4d-secret-volume\") pod \"collect-profiles-29557770-ng6bz\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750306 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750354 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8557h\" (UniqueName: \"kubernetes.io/projected/7767bcc0-c4e6-4e9e-ba1f-5286b7263f44-kube-api-access-8557h\") pod \"package-server-manager-789f6589d5-k2dxk\" (UID: \"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750373 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3740912-6b07-47f2-80f2-7f62e67cbef2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750398 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750438 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-signing-cabundle\") pod \"service-ca-9c57cc56f-4khbc\" (UID: \"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750461 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750528 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndlft\" (UniqueName: \"kubernetes.io/projected/536efb61-e25f-4be4-88b8-e2a7f7e0df84-kube-api-access-ndlft\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750547 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rnd\" (UniqueName: \"kubernetes.io/projected/b35f8ad5-461a-4c6c-aba1-56b3358990f8-kube-api-access-56rnd\") pod \"auto-csr-approver-29557774-2vhw8\" (UID: \"b35f8ad5-461a-4c6c-aba1-56b3358990f8\") " pod="openshift-infra/auto-csr-approver-29557774-2vhw8" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750605 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1b60a8f-12a6-4129-9b96-2b69e788111b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750644 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 
05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750697 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1555cef9-f674-40c5-8edf-e0a02cda8d4b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750743 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqmdm\" (UniqueName: \"kubernetes.io/projected/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-kube-api-access-tqmdm\") pod \"service-ca-9c57cc56f-4khbc\" (UID: \"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750785 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-signing-key\") pod \"service-ca-9c57cc56f-4khbc\" (UID: \"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.750819 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-trusted-ca\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: E0314 05:34:58.754460 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-14 05:34:59.254447854 +0000 UTC m=+153.292708600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.767769 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gcsxl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.801158 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851389 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851646 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cjhd\" (UniqueName: \"kubernetes.io/projected/cdb37610-64a2-43d9-98b1-513a60b6de4d-kube-api-access-8cjhd\") pod \"collect-profiles-29557770-ng6bz\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851673 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-metrics-certs\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851697 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbcwq\" (UniqueName: \"kubernetes.io/projected/71e0f963-52a3-45c7-a104-bc2a081c6e8e-kube-api-access-wbcwq\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851714 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-default-certificate\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851735 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnjf7\" (UniqueName: \"kubernetes.io/projected/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-kube-api-access-vnjf7\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851766 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-config-volume\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851795 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71e0f963-52a3-45c7-a104-bc2a081c6e8e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851837 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdlq9\" (UniqueName: \"kubernetes.io/projected/77e31af0-1176-4696-8d10-2a8425e75077-kube-api-access-pdlq9\") pod \"ingress-canary-twv27\" (UID: \"77e31af0-1176-4696-8d10-2a8425e75077\") " pod="openshift-ingress-canary/ingress-canary-twv27" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851863 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-node-bootstrap-token\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851880 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d848b2e-47e2-431b-aff4-db85a17027ac-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851915 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb37610-64a2-43d9-98b1-513a60b6de4d-config-volume\") pod \"collect-profiles-29557770-ng6bz\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851932 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-mountpoint-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851952 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1b60a8f-12a6-4129-9b96-2b69e788111b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851979 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-certificates\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.851994 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-socket-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852036 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-service-ca-bundle\") pod 
\"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852065 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-bound-sa-token\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852080 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/536efb61-e25f-4be4-88b8-e2a7f7e0df84-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852098 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3740912-6b07-47f2-80f2-7f62e67cbef2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852124 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1555cef9-f674-40c5-8edf-e0a02cda8d4b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852138 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1555cef9-f674-40c5-8edf-e0a02cda8d4b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852158 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-certs\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852172 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4wj\" (UniqueName: \"kubernetes.io/projected/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-kube-api-access-ww4wj\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852195 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb37610-64a2-43d9-98b1-513a60b6de4d-secret-volume\") pod \"collect-profiles-29557770-ng6bz\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852212 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1047b1-fd57-45c9-b262-9f087572e514-config\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852236 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852257 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77e31af0-1176-4696-8d10-2a8425e75077-cert\") pod \"ingress-canary-twv27\" (UID: \"77e31af0-1176-4696-8d10-2a8425e75077\") " pod="openshift-ingress-canary/ingress-canary-twv27" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852286 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8557h\" (UniqueName: \"kubernetes.io/projected/7767bcc0-c4e6-4e9e-ba1f-5286b7263f44-kube-api-access-8557h\") pod \"package-server-manager-789f6589d5-k2dxk\" (UID: \"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852300 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3740912-6b07-47f2-80f2-7f62e67cbef2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852315 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852332 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/71e0f963-52a3-45c7-a104-bc2a081c6e8e-ready\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852349 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g82cc\" (UniqueName: \"kubernetes.io/projected/564dba5a-f688-4ee3-9f4c-799539db7890-kube-api-access-g82cc\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852366 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7lxp\" (UniqueName: \"kubernetes.io/projected/db1d9ab3-f304-42b2-9535-e41c50ce108c-kube-api-access-j7lxp\") pod \"multus-admission-controller-857f4d67dd-bq8dk\" (UID: \"db1d9ab3-f304-42b2-9535-e41c50ce108c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852394 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz2mp\" (UniqueName: \"kubernetes.io/projected/e0858db7-f7bb-4ef4-a46e-09dae35d6030-kube-api-access-hz2mp\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852420 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-signing-cabundle\") pod \"service-ca-9c57cc56f-4khbc\" (UID: \"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852445 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852470 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndlft\" (UniqueName: \"kubernetes.io/projected/536efb61-e25f-4be4-88b8-e2a7f7e0df84-kube-api-access-ndlft\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852485 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d848b2e-47e2-431b-aff4-db85a17027ac-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852518 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-56rnd\" (UniqueName: \"kubernetes.io/projected/b35f8ad5-461a-4c6c-aba1-56b3358990f8-kube-api-access-56rnd\") pod \"auto-csr-approver-29557774-2vhw8\" (UID: \"b35f8ad5-461a-4c6c-aba1-56b3358990f8\") " pod="openshift-infra/auto-csr-approver-29557774-2vhw8" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852534 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0858db7-f7bb-4ef4-a46e-09dae35d6030-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852578 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbf65941-df6f-4ba2-970a-b1346853d39b-proxy-tls\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852595 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-plugins-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852629 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1b60a8f-12a6-4129-9b96-2b69e788111b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: 
\"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852663 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nh5\" (UniqueName: \"kubernetes.io/projected/41cbacfd-0181-45a3-86c2-8a51f672a37d-kube-api-access-d4nh5\") pod \"migrator-59844c95c7-ssxwb\" (UID: \"41cbacfd-0181-45a3-86c2-8a51f672a37d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852680 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/71e0f963-52a3-45c7-a104-bc2a081c6e8e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852695 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fbf65941-df6f-4ba2-970a-b1346853d39b-images\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852724 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1555cef9-f674-40c5-8edf-e0a02cda8d4b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852739 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa1047b1-fd57-45c9-b262-9f087572e514-serving-cert\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852757 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqmdm\" (UniqueName: \"kubernetes.io/projected/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-kube-api-access-tqmdm\") pod \"service-ca-9c57cc56f-4khbc\" (UID: \"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852774 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbf65941-df6f-4ba2-970a-b1346853d39b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852790 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/564dba5a-f688-4ee3-9f4c-799539db7890-profile-collector-cert\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852814 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-signing-key\") pod \"service-ca-9c57cc56f-4khbc\" (UID: 
\"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852830 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw5n4\" (UniqueName: \"kubernetes.io/projected/fbf65941-df6f-4ba2-970a-b1346853d39b-kube-api-access-pw5n4\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852879 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftqvr\" (UniqueName: \"kubernetes.io/projected/1661ef8f-2020-4c65-8228-434786300314-kube-api-access-ftqvr\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852952 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-trusted-ca\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.852987 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d848b2e-47e2-431b-aff4-db85a17027ac-config\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853032 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0858db7-f7bb-4ef4-a46e-09dae35d6030-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:58 crc kubenswrapper[4817]: E0314 05:34:58.853081 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:59.35305269 +0000 UTC m=+153.391313436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853160 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t8sc\" (UniqueName: \"kubernetes.io/projected/5f8f4a28-ea66-4a2a-8cc8-ad845efd3266-kube-api-access-9t8sc\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgpqw\" (UID: \"5f8f4a28-ea66-4a2a-8cc8-ad845efd3266\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853218 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg7pz\" (UniqueName: \"kubernetes.io/projected/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-kube-api-access-vg7pz\") pod 
\"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853291 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfs6j\" (UniqueName: \"kubernetes.io/projected/a3740912-6b07-47f2-80f2-7f62e67cbef2-kube-api-access-bfs6j\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853357 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3740912-6b07-47f2-80f2-7f62e67cbef2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853384 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db1d9ab3-f304-42b2-9535-e41c50ce108c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bq8dk\" (UID: \"db1d9ab3-f304-42b2-9535-e41c50ce108c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853425 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-tls\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853447 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/564dba5a-f688-4ee3-9f4c-799539db7890-srv-cert\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853524 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhfw\" (UniqueName: \"kubernetes.io/projected/aa1047b1-fd57-45c9-b262-9f087572e514-kube-api-access-9fhfw\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853545 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-csi-data-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853604 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7p56\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-kube-api-access-b7p56\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853624 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/536efb61-e25f-4be4-88b8-e2a7f7e0df84-proxy-tls\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853646 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f8f4a28-ea66-4a2a-8cc8-ad845efd3266-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgpqw\" (UID: \"5f8f4a28-ea66-4a2a-8cc8-ad845efd3266\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853708 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-stats-auth\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853729 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbd22\" (UniqueName: \"kubernetes.io/projected/5624b850-5ca9-47d2-82e9-52bbc3829bc5-kube-api-access-xbd22\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853762 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7767bcc0-c4e6-4e9e-ba1f-5286b7263f44-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k2dxk\" (UID: \"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853781 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853816 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-config\") pod \"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853837 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-registration-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.853860 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-metrics-tls\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.856882 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.861458 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb37610-64a2-43d9-98b1-513a60b6de4d-config-volume\") pod \"collect-profiles-29557770-ng6bz\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.861728 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1b60a8f-12a6-4129-9b96-2b69e788111b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.863429 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-default-certificate\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.867413 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.868436 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-signing-cabundle\") 
pod \"service-ca-9c57cc56f-4khbc\" (UID: \"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.870859 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-service-ca-bundle\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.872235 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/536efb61-e25f-4be4-88b8-e2a7f7e0df84-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.874469 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/536efb61-e25f-4be4-88b8-e2a7f7e0df84-proxy-tls\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.874492 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-metrics-certs\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.883255 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.888101 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-stats-auth\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.893832 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb37610-64a2-43d9-98b1-513a60b6de4d-secret-volume\") pod \"collect-profiles-29557770-ng6bz\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.893974 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7767bcc0-c4e6-4e9e-ba1f-5286b7263f44-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k2dxk\" (UID: \"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.894289 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-signing-key\") pod \"service-ca-9c57cc56f-4khbc\" (UID: \"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.898719 4817 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8cjhd\" (UniqueName: \"kubernetes.io/projected/cdb37610-64a2-43d9-98b1-513a60b6de4d-kube-api-access-8cjhd\") pod \"collect-profiles-29557770-ng6bz\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.905529 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg7pz\" (UniqueName: \"kubernetes.io/projected/eaccc4e5-8d10-4383-9aa4-576dbe31fafa-kube-api-access-vg7pz\") pod \"router-default-5444994796-76kfl\" (UID: \"eaccc4e5-8d10-4383-9aa4-576dbe31fafa\") " pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.932730 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.954695 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db1d9ab3-f304-42b2-9535-e41c50ce108c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bq8dk\" (UID: \"db1d9ab3-f304-42b2-9535-e41c50ce108c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955081 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/564dba5a-f688-4ee3-9f4c-799539db7890-srv-cert\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955108 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fhfw\" (UniqueName: 
\"kubernetes.io/projected/aa1047b1-fd57-45c9-b262-9f087572e514-kube-api-access-9fhfw\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955131 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-csi-data-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955159 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f8f4a28-ea66-4a2a-8cc8-ad845efd3266-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgpqw\" (UID: \"5f8f4a28-ea66-4a2a-8cc8-ad845efd3266\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955199 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-registration-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955214 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-metrics-tls\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955241 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wbcwq\" (UniqueName: \"kubernetes.io/projected/71e0f963-52a3-45c7-a104-bc2a081c6e8e-kube-api-access-wbcwq\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955255 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnjf7\" (UniqueName: \"kubernetes.io/projected/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-kube-api-access-vnjf7\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955270 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-config-volume\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955285 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71e0f963-52a3-45c7-a104-bc2a081c6e8e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955301 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdlq9\" (UniqueName: \"kubernetes.io/projected/77e31af0-1176-4696-8d10-2a8425e75077-kube-api-access-pdlq9\") pod \"ingress-canary-twv27\" (UID: \"77e31af0-1176-4696-8d10-2a8425e75077\") " pod="openshift-ingress-canary/ingress-canary-twv27" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955317 4817 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-node-bootstrap-token\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955332 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d848b2e-47e2-431b-aff4-db85a17027ac-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955350 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-mountpoint-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955371 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-socket-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955409 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-certs\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955424 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4wj\" (UniqueName: \"kubernetes.io/projected/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-kube-api-access-ww4wj\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955439 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1047b1-fd57-45c9-b262-9f087572e514-config\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955467 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77e31af0-1176-4696-8d10-2a8425e75077-cert\") pod \"ingress-canary-twv27\" (UID: \"77e31af0-1176-4696-8d10-2a8425e75077\") " pod="openshift-ingress-canary/ingress-canary-twv27" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955480 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/71e0f963-52a3-45c7-a104-bc2a081c6e8e-ready\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955497 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g82cc\" (UniqueName: \"kubernetes.io/projected/564dba5a-f688-4ee3-9f4c-799539db7890-kube-api-access-g82cc\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 
05:34:58.955516 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7lxp\" (UniqueName: \"kubernetes.io/projected/db1d9ab3-f304-42b2-9535-e41c50ce108c-kube-api-access-j7lxp\") pod \"multus-admission-controller-857f4d67dd-bq8dk\" (UID: \"db1d9ab3-f304-42b2-9535-e41c50ce108c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955533 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz2mp\" (UniqueName: \"kubernetes.io/projected/e0858db7-f7bb-4ef4-a46e-09dae35d6030-kube-api-access-hz2mp\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955562 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d848b2e-47e2-431b-aff4-db85a17027ac-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955582 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0858db7-f7bb-4ef4-a46e-09dae35d6030-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955599 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fbf65941-df6f-4ba2-970a-b1346853d39b-proxy-tls\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955613 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-plugins-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955635 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955652 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nh5\" (UniqueName: \"kubernetes.io/projected/41cbacfd-0181-45a3-86c2-8a51f672a37d-kube-api-access-d4nh5\") pod \"migrator-59844c95c7-ssxwb\" (UID: \"41cbacfd-0181-45a3-86c2-8a51f672a37d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955669 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/71e0f963-52a3-45c7-a104-bc2a081c6e8e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955683 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fbf65941-df6f-4ba2-970a-b1346853d39b-images\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955703 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa1047b1-fd57-45c9-b262-9f087572e514-serving-cert\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955725 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbf65941-df6f-4ba2-970a-b1346853d39b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955739 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/564dba5a-f688-4ee3-9f4c-799539db7890-profile-collector-cert\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955762 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw5n4\" (UniqueName: \"kubernetes.io/projected/fbf65941-df6f-4ba2-970a-b1346853d39b-kube-api-access-pw5n4\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: 
\"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955788 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftqvr\" (UniqueName: \"kubernetes.io/projected/1661ef8f-2020-4c65-8228-434786300314-kube-api-access-ftqvr\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955813 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d848b2e-47e2-431b-aff4-db85a17027ac-config\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955838 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0858db7-f7bb-4ef4-a46e-09dae35d6030-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.955856 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t8sc\" (UniqueName: \"kubernetes.io/projected/5f8f4a28-ea66-4a2a-8cc8-ad845efd3266-kube-api-access-9t8sc\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgpqw\" (UID: \"5f8f4a28-ea66-4a2a-8cc8-ad845efd3266\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.956500 4817 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-config-volume\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.956551 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71e0f963-52a3-45c7-a104-bc2a081c6e8e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.957561 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-csi-data-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.958949 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1047b1-fd57-45c9-b262-9f087572e514-config\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.959428 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-socket-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.959744 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fbf65941-df6f-4ba2-970a-b1346853d39b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.959744 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d848b2e-47e2-431b-aff4-db85a17027ac-config\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.960193 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0858db7-f7bb-4ef4-a46e-09dae35d6030-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:58 crc kubenswrapper[4817]: E0314 05:34:58.960439 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:34:59.46042722 +0000 UTC m=+153.498687966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.961014 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-mountpoint-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.961101 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-plugins-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.961200 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1661ef8f-2020-4c65-8228-434786300314-registration-dir\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.961346 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/71e0f963-52a3-45c7-a104-bc2a081c6e8e-ready\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc 
kubenswrapper[4817]: I0314 05:34:58.961628 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/71e0f963-52a3-45c7-a104-bc2a081c6e8e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.962110 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-metrics-tls\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.962583 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db1d9ab3-f304-42b2-9535-e41c50ce108c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bq8dk\" (UID: \"db1d9ab3-f304-42b2-9535-e41c50ce108c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.962616 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fbf65941-df6f-4ba2-970a-b1346853d39b-images\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.969721 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa1047b1-fd57-45c9-b262-9f087572e514-serving-cert\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:58 crc 
kubenswrapper[4817]: I0314 05:34:58.970439 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8557h\" (UniqueName: \"kubernetes.io/projected/7767bcc0-c4e6-4e9e-ba1f-5286b7263f44-kube-api-access-8557h\") pod \"package-server-manager-789f6589d5-k2dxk\" (UID: \"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.973227 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77e31af0-1176-4696-8d10-2a8425e75077-cert\") pod \"ingress-canary-twv27\" (UID: \"77e31af0-1176-4696-8d10-2a8425e75077\") " pod="openshift-ingress-canary/ingress-canary-twv27" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.973527 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0858db7-f7bb-4ef4-a46e-09dae35d6030-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.975647 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.980176 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/564dba5a-f688-4ee3-9f4c-799539db7890-profile-collector-cert\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.982706 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/564dba5a-f688-4ee3-9f4c-799539db7890-srv-cert\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.986396 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z5ltz"] Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.987717 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f8f4a28-ea66-4a2a-8cc8-ad845efd3266-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgpqw\" (UID: \"5f8f4a28-ea66-4a2a-8cc8-ad845efd3266\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.991035 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.996222 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7"] Mar 14 05:34:58 crc kubenswrapper[4817]: I0314 05:34:58.996256 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.042487 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d848b2e-47e2-431b-aff4-db85a17027ac-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.042801 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-node-bootstrap-token\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.042982 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-certs\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.043236 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3740912-6b07-47f2-80f2-7f62e67cbef2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" 
(UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.043456 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-config\") pod \"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.043742 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-certificates\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.043800 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.044758 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-trusted-ca\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.047334 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1555cef9-f674-40c5-8edf-e0a02cda8d4b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.048958 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3740912-6b07-47f2-80f2-7f62e67cbef2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.050369 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fbf65941-df6f-4ba2-970a-b1346853d39b-proxy-tls\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.052256 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1555cef9-f674-40c5-8edf-e0a02cda8d4b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.054643 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndlft\" (UniqueName: \"kubernetes.io/projected/536efb61-e25f-4be4-88b8-e2a7f7e0df84-kube-api-access-ndlft\") pod \"machine-config-controller-84d6567774-ktxdp\" (UID: \"536efb61-e25f-4be4-88b8-e2a7f7e0df84\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.055151 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c785e61-52c7-494e-8cf9-a6d3bf4bae9f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lvcvt\" (UID: \"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.055495 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-tls\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.056362 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfs6j\" (UniqueName: \"kubernetes.io/projected/a3740912-6b07-47f2-80f2-7f62e67cbef2-kube-api-access-bfs6j\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.056764 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.057610 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a3740912-6b07-47f2-80f2-7f62e67cbef2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q9gdj\" (UID: \"a3740912-6b07-47f2-80f2-7f62e67cbef2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.057882 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:59.557849822 +0000 UTC m=+153.596110578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.061164 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.061597 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:34:59.56158396 +0000 UTC m=+153.599844706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.063535 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1b60a8f-12a6-4129-9b96-2b69e788111b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.064309 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7p56\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-kube-api-access-b7p56\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.072547 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56rnd\" (UniqueName: \"kubernetes.io/projected/b35f8ad5-461a-4c6c-aba1-56b3358990f8-kube-api-access-56rnd\") pod \"auto-csr-approver-29557774-2vhw8\" (UID: \"b35f8ad5-461a-4c6c-aba1-56b3358990f8\") " pod="openshift-infra/auto-csr-approver-29557774-2vhw8" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.083302 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-bound-sa-token\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: 
\"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: W0314 05:34:59.094585 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaccc4e5_8d10_4383_9aa4_576dbe31fafa.slice/crio-9f7eab3c63eba8e3bb9eddd4cd614efd9c01be579c6cfd637de14d33a2bfdab1 WatchSource:0}: Error finding container 9f7eab3c63eba8e3bb9eddd4cd614efd9c01be579c6cfd637de14d33a2bfdab1: Status 404 returned error can't find the container with id 9f7eab3c63eba8e3bb9eddd4cd614efd9c01be579c6cfd637de14d33a2bfdab1 Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.101109 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbd22\" (UniqueName: \"kubernetes.io/projected/5624b850-5ca9-47d2-82e9-52bbc3829bc5-kube-api-access-xbd22\") pod \"marketplace-operator-79b997595-5cqlk\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.111182 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-f772q"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.112279 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.113306 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk"] Mar 14 05:34:59 crc kubenswrapper[4817]: W0314 05:34:59.125268 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6b916de_657d_4285_80a7_e22fe89dd6f8.slice/crio-b0234379a1234d2af43a59302fe423edd613aa98e6ac46f5d4422ccfde52a4da WatchSource:0}: Error finding container b0234379a1234d2af43a59302fe423edd613aa98e6ac46f5d4422ccfde52a4da: Status 404 returned error can't find the container with id b0234379a1234d2af43a59302fe423edd613aa98e6ac46f5d4422ccfde52a4da Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.130452 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqmdm\" (UniqueName: \"kubernetes.io/projected/6bbd1345-183f-44c1-ba4c-60d4bc2d34dd-kube-api-access-tqmdm\") pod \"service-ca-9c57cc56f-4khbc\" (UID: \"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.163012 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.163452 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 05:34:59.663436791 +0000 UTC m=+153.701697537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.163573 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t8sc\" (UniqueName: \"kubernetes.io/projected/5f8f4a28-ea66-4a2a-8cc8-ad845efd3266-kube-api-access-9t8sc\") pod \"control-plane-machine-set-operator-78cbb6b69f-mgpqw\" (UID: \"5f8f4a28-ea66-4a2a-8cc8-ad845efd3266\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.173821 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1555cef9-f674-40c5-8edf-e0a02cda8d4b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mm5dj\" (UID: \"1555cef9-f674-40c5-8edf-e0a02cda8d4b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.187928 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.188675 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbcwq\" (UniqueName: \"kubernetes.io/projected/71e0f963-52a3-45c7-a104-bc2a081c6e8e-kube-api-access-wbcwq\") pod \"cni-sysctl-allowlist-ds-pk9ws\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.199185 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gcsxl"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.204271 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnjf7\" (UniqueName: \"kubernetes.io/projected/f686c00d-9b3f-4ad0-a44a-2a27218f9d3c-kube-api-access-vnjf7\") pod \"dns-default-zc2cb\" (UID: \"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c\") " pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.230833 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdlq9\" (UniqueName: \"kubernetes.io/projected/77e31af0-1176-4696-8d10-2a8425e75077-kube-api-access-pdlq9\") pod \"ingress-canary-twv27\" (UID: \"77e31af0-1176-4696-8d10-2a8425e75077\") " pod="openshift-ingress-canary/ingress-canary-twv27" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.236092 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5mbx5"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.241931 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jx7hb"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.251436 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fhfw\" (UniqueName: 
\"kubernetes.io/projected/aa1047b1-fd57-45c9-b262-9f087572e514-kube-api-access-9fhfw\") pod \"service-ca-operator-777779d784-x4mwq\" (UID: \"aa1047b1-fd57-45c9-b262-9f087572e514\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.258918 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.265450 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.265760 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:34:59.765749294 +0000 UTC m=+153.804010040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.265950 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.273483 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw5n4\" (UniqueName: \"kubernetes.io/projected/fbf65941-df6f-4ba2-970a-b1346853d39b-kube-api-access-pw5n4\") pod \"machine-config-operator-74547568cd-4ztcp\" (UID: \"fbf65941-df6f-4ba2-970a-b1346853d39b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.282303 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.284333 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftqvr\" (UniqueName: \"kubernetes.io/projected/1661ef8f-2020-4c65-8228-434786300314-kube-api-access-ftqvr\") pod \"csi-hostpathplugin-6mx7t\" (UID: \"1661ef8f-2020-4c65-8228-434786300314\") " pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.313188 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557774-2vhw8" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.313525 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8hzr5"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.313558 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.316025 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.316529 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.322509 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.322951 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7zj7n"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.325833 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jnxpm"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.332673 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7lxp\" (UniqueName: \"kubernetes.io/projected/db1d9ab3-f304-42b2-9535-e41c50ce108c-kube-api-access-j7lxp\") pod \"multus-admission-controller-857f4d67dd-bq8dk\" (UID: \"db1d9ab3-f304-42b2-9535-e41c50ce108c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.334708 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.342133 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.347267 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4wj\" (UniqueName: \"kubernetes.io/projected/06ed05d5-4daf-4adf-8f01-bdfc8acf2490-kube-api-access-ww4wj\") pod \"machine-config-server-9bjwk\" (UID: \"06ed05d5-4daf-4adf-8f01-bdfc8acf2490\") " pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.355155 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.359761 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.362406 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz2mp\" (UniqueName: \"kubernetes.io/projected/e0858db7-f7bb-4ef4-a46e-09dae35d6030-kube-api-access-hz2mp\") pod \"kube-storage-version-migrator-operator-b67b599dd-tgp8z\" (UID: \"e0858db7-f7bb-4ef4-a46e-09dae35d6030\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.364422 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g82cc\" (UniqueName: \"kubernetes.io/projected/564dba5a-f688-4ee3-9f4c-799539db7890-kube-api-access-g82cc\") pod \"catalog-operator-68c6474976-gmsz9\" (UID: \"564dba5a-f688-4ee3-9f4c-799539db7890\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.367023 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.368742 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:34:59.868717817 +0000 UTC m=+153.906978553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.371323 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.379722 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.379989 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-twv27" Mar 14 05:34:59 crc kubenswrapper[4817]: W0314 05:34:59.381311 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac384b90_5e6b_4477_b71a_8a8a56a29896.slice/crio-4ef82e82aec93936236e7322113f83734e45d10ee3e0b9f4c0d29cdcdcd5cd4f WatchSource:0}: Error finding container 4ef82e82aec93936236e7322113f83734e45d10ee3e0b9f4c0d29cdcdcd5cd4f: Status 404 returned error can't find the container with id 4ef82e82aec93936236e7322113f83734e45d10ee3e0b9f4c0d29cdcdcd5cd4f Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.383229 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.387268 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d848b2e-47e2-431b-aff4-db85a17027ac-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vftwz\" (UID: \"5d848b2e-47e2-431b-aff4-db85a17027ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.390324 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9bjwk" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.402090 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.408752 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nh5\" (UniqueName: \"kubernetes.io/projected/41cbacfd-0181-45a3-86c2-8a51f672a37d-kube-api-access-d4nh5\") pod \"migrator-59844c95c7-ssxwb\" (UID: \"41cbacfd-0181-45a3-86c2-8a51f672a37d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.414427 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zc2cb" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.419238 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.441383 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wbttc"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.442920 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rt8qf"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.457658 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86l7w"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.458390 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.470055 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.470364 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:34:59.970351561 +0000 UTC m=+154.008612317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.501680 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-76kfl" event={"ID":"eaccc4e5-8d10-4383-9aa4-576dbe31fafa","Type":"ContainerStarted","Data":"9f7eab3c63eba8e3bb9eddd4cd614efd9c01be579c6cfd637de14d33a2bfdab1"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.503342 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.503926 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gcsxl" event={"ID":"3b721585-56d3-4382-b93d-c70296e6d223","Type":"ContainerStarted","Data":"57b249fd5452dc0ecb4b157c681bda36379afa454bcfdd051436483004d9abe8"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.504719 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" 
event={"ID":"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44","Type":"ContainerStarted","Data":"72079fc26756b33b47c9b78d4be1de74be4324b57896ebe5caa6dc88eb4b2c9e"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.505395 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" event={"ID":"9c53feda-784d-431d-bbd6-528333b58935","Type":"ContainerStarted","Data":"b8ae9663f3684b1826ebd5a763aac52e3410d7d05cf97efab65ef9d4d8e04163"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.506047 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" event={"ID":"83bda382-2788-43b8-b149-9e8319aaa2c9","Type":"ContainerStarted","Data":"0bede0942f85a71db4682d08d2aad596798ac635bfb96d1db0c4433be694be1b"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.507965 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" event={"ID":"0eae806d-94d0-4c92-a008-3853f85933a9","Type":"ContainerStarted","Data":"a97b664d26111dd3e83ebd718cdc9d8ab6fdcb4454e6a2ec227136b341cc8603"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.509444 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" event={"ID":"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d","Type":"ContainerStarted","Data":"73360589880f2c0789a5f1fd6817610258144c1f8845d528210aa24762ca6ced"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.510817 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" event={"ID":"a3740912-6b07-47f2-80f2-7f62e67cbef2","Type":"ContainerStarted","Data":"6231405ceafef361e630f9936a36d1bf08e49d8e9209b90335216213b5c076c8"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.511447 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" event={"ID":"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4","Type":"ContainerStarted","Data":"f11fe5e1508b71d49e43f647d233538d1c7129eb36c18c0737b9cc08f2d7dd57"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.513081 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jnxpm" event={"ID":"ac384b90-5e6b-4477-b71a-8a8a56a29896","Type":"ContainerStarted","Data":"4ef82e82aec93936236e7322113f83734e45d10ee3e0b9f4c0d29cdcdcd5cd4f"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.513738 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" event={"ID":"2e1f15d1-2628-4fa2-b571-b39051f128d2","Type":"ContainerStarted","Data":"e3642dc7b4fe6d31a1d9a220ff38bdcaf25c4b9636d222635854b1deb056e67d"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.514363 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" event={"ID":"4b50e6b0-0310-4c00-9fe4-b7d987811711","Type":"ContainerStarted","Data":"6fbe8e65b11e1c09f2f3534b5316f9c511671f9b14f9356b790b3b0466b5c4b7"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.514930 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" event={"ID":"bf21823d-0caa-409b-8cf7-47de479e404d","Type":"ContainerStarted","Data":"39f5ae7d388508d3a253a712de917ad75c536380c5577ae09bf1493ef8fe857d"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.517848 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" event={"ID":"ee782dd8-7162-4e94-a2c6-5af0c4596ecf","Type":"ContainerStarted","Data":"03622b927d06a1caeeb86eb671058bc228b2514797e0b50bcedd7fc4e1e45082"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.518742 4817 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" event={"ID":"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7","Type":"ContainerStarted","Data":"ad476209b1017880ed3e72b4dbb0908aa3227ec5ce4f5442cdeed550a06a8a53"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.519518 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" event={"ID":"c6b916de-657d-4285-80a7-e22fe89dd6f8","Type":"ContainerStarted","Data":"b0234379a1234d2af43a59302fe423edd613aa98e6ac46f5d4422ccfde52a4da"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.520593 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" event={"ID":"30a068b8-6cf5-4f60-9a4b-89c40b37ad32","Type":"ContainerStarted","Data":"72bd7e80f1abd14e784322313489678227d4087b615c81c824418703e4aa3f59"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.521402 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" event={"ID":"61083608-8b3b-4c98-a236-0d3f1d26d3b5","Type":"ContainerStarted","Data":"d87571f55dc57a5f776f6921c29260ac588d30082f1c4d470f00a13242c37d6b"} Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.571487 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.571784 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 05:35:00.071770338 +0000 UTC m=+154.110031084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.575434 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp"] Mar 14 05:34:59 crc kubenswrapper[4817]: W0314 05:34:59.577643 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fc456eb_fbb1_42c7_88fa_f7a3e13a397a.slice/crio-2c7ce9881b260dc51223ace6dd2ad0a515a9e39cc96127b58de6c86a3aed4309 WatchSource:0}: Error finding container 2c7ce9881b260dc51223ace6dd2ad0a515a9e39cc96127b58de6c86a3aed4309: Status 404 returned error can't find the container with id 2c7ce9881b260dc51223ace6dd2ad0a515a9e39cc96127b58de6c86a3aed4309 Mar 14 05:34:59 crc kubenswrapper[4817]: W0314 05:34:59.577981 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d625077_3908_4a2f_a87e_5631a1a2a450.slice/crio-f2115555f97cf112f910e5dbcbe1dff3f6712cbd60bb0ae6fad13789fc55db21 WatchSource:0}: Error finding container f2115555f97cf112f910e5dbcbe1dff3f6712cbd60bb0ae6fad13789fc55db21: Status 404 returned error can't find the container with id f2115555f97cf112f910e5dbcbe1dff3f6712cbd60bb0ae6fad13789fc55db21 Mar 14 05:34:59 crc kubenswrapper[4817]: W0314 05:34:59.632362 4817 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdb37610_64a2_43d9_98b1_513a60b6de4d.slice/crio-39bee6d4b7f6587f967af9c30c936d1633a598a19707fc35a4c7d5fe81bd64d2 WatchSource:0}: Error finding container 39bee6d4b7f6587f967af9c30c936d1633a598a19707fc35a4c7d5fe81bd64d2: Status 404 returned error can't find the container with id 39bee6d4b7f6587f967af9c30c936d1633a598a19707fc35a4c7d5fe81bd64d2 Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.634349 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" Mar 14 05:34:59 crc kubenswrapper[4817]: W0314 05:34:59.634870 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4da22dbe_ae9a_4df9_a16c_76b248ef22b4.slice/crio-3580995f86d9dfe6141c388973257a1e9d436554f932c050b2d229d7aa8ff7ac WatchSource:0}: Error finding container 3580995f86d9dfe6141c388973257a1e9d436554f932c050b2d229d7aa8ff7ac: Status 404 returned error can't find the container with id 3580995f86d9dfe6141c388973257a1e9d436554f932c050b2d229d7aa8ff7ac Mar 14 05:34:59 crc kubenswrapper[4817]: W0314 05:34:59.635085 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ef409d_9eab_4e6a_8009_4f06ca780969.slice/crio-a7ac5375241f5485f692c9a029911f1130933a45a674872bd21e78f7c5b705e0 WatchSource:0}: Error finding container a7ac5375241f5485f692c9a029911f1130933a45a674872bd21e78f7c5b705e0: Status 404 returned error can't find the container with id a7ac5375241f5485f692c9a029911f1130933a45a674872bd21e78f7c5b705e0 Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.648722 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.662482 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.672943 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.673257 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.173246607 +0000 UTC m=+154.211507353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.697415 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.776964 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.777254 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.277230709 +0000 UTC m=+154.315491455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.777623 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.777978 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.277963961 +0000 UTC m=+154.316224707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.880514 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw"] Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.880672 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.880910 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.380876862 +0000 UTC m=+154.419137598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.881372 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.881835 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.381827329 +0000 UTC m=+154.420088075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.983572 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.983962 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.483935267 +0000 UTC m=+154.522196013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:34:59 crc kubenswrapper[4817]: I0314 05:34:59.984059 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:34:59 crc kubenswrapper[4817]: E0314 05:34:59.984454 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.484437451 +0000 UTC m=+154.522698187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.084623 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.085041 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.585024885 +0000 UTC m=+154.623285631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.185836 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.186211 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.686196796 +0000 UTC m=+154.724457552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.210663 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-twv27"] Mar 14 05:35:00 crc kubenswrapper[4817]: W0314 05:35:00.221699 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f8f4a28_ea66_4a2a_8cc8_ad845efd3266.slice/crio-c0984e652c512257fb8d8cac8aff73e628129ac96613fa466322065e4920a25e WatchSource:0}: Error finding container c0984e652c512257fb8d8cac8aff73e628129ac96613fa466322065e4920a25e: Status 404 returned error can't find the container with id c0984e652c512257fb8d8cac8aff73e628129ac96613fa466322065e4920a25e Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.243560 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.270394 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.287461 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:00 crc kubenswrapper[4817]: 
E0314 05:35:00.287605 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.787577152 +0000 UTC m=+154.825837898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.291238 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.291651 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.791638 +0000 UTC m=+154.829898746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.298313 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4khbc"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.315007 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5cqlk"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.318550 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557774-2vhw8"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.349221 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bq8dk"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.368708 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.392600 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.392927 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.892912103 +0000 UTC m=+154.931172849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: W0314 05:35:00.407337 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c785e61_52c7_494e_8cf9_a6d3bf4bae9f.slice/crio-71f55e017cb45c5c65ec521a2b20a62fb8a330662a2f774f571d1abe9e964318 WatchSource:0}: Error finding container 71f55e017cb45c5c65ec521a2b20a62fb8a330662a2f774f571d1abe9e964318: Status 404 returned error can't find the container with id 71f55e017cb45c5c65ec521a2b20a62fb8a330662a2f774f571d1abe9e964318 Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.446705 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq"] Mar 14 05:35:00 crc kubenswrapper[4817]: W0314 05:35:00.476357 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb1d9ab3_f304_42b2_9535_e41c50ce108c.slice/crio-13490ebe183206214b30dd72f9c9a0c0cf9ef14600a94d852421406668770a4f WatchSource:0}: Error finding container 13490ebe183206214b30dd72f9c9a0c0cf9ef14600a94d852421406668770a4f: Status 404 returned error can't find the container with id 13490ebe183206214b30dd72f9c9a0c0cf9ef14600a94d852421406668770a4f Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.494690 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.495066 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:00.995052942 +0000 UTC m=+155.033313688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.500334 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 05:35:00 crc kubenswrapper[4817]: W0314 05:35:00.540501 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa1047b1_fd57_45c9_b262_9f087572e514.slice/crio-9f07c1a1ca2bb1389a5cbf257e42439aa024b03939b0d34becb32d1e09d5db7c WatchSource:0}: Error finding container 9f07c1a1ca2bb1389a5cbf257e42439aa024b03939b0d34becb32d1e09d5db7c: Status 404 returned error can't find the container with id 9f07c1a1ca2bb1389a5cbf257e42439aa024b03939b0d34becb32d1e09d5db7c Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.590374 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-76kfl" event={"ID":"eaccc4e5-8d10-4383-9aa4-576dbe31fafa","Type":"ContainerStarted","Data":"2e985e7f1b6f213d3369fc5543f40202238389f5aa99eafb5d633af25abf57e5"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.598195 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.598563 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.09854206 +0000 UTC m=+155.136802806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.599369 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" event={"ID":"536efb61-e25f-4be4-88b8-e2a7f7e0df84","Type":"ContainerStarted","Data":"0418f5e98a49ff1a8444fc6cc54964c2b23d949a5e2e563a1da9a9ce2132888b"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.633633 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gcsxl" 
event={"ID":"3b721585-56d3-4382-b93d-c70296e6d223","Type":"ContainerStarted","Data":"1e37a1c2d6625fa16835bd774773e462592eef96a43479830195e5ec5fca4282"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.633812 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gcsxl" Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.652445 4817 patch_prober.go:28] interesting pod/downloads-7954f5f757-gcsxl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.652534 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gcsxl" podUID="3b721585-56d3-4382-b93d-c70296e6d223" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.659013 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557774-2vhw8" event={"ID":"b35f8ad5-461a-4c6c-aba1-56b3358990f8","Type":"ContainerStarted","Data":"64917e03e397e23ce24be6b0e9bd4f2394907ac4a51464d6699bb3b132bdd4d5"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.674945 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-twv27" event={"ID":"77e31af0-1176-4696-8d10-2a8425e75077","Type":"ContainerStarted","Data":"82327a3200980c310ccd3cedb9d7a501224cab95f6338fb3917e889059a02c53"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.710245 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" 
event={"ID":"ee782dd8-7162-4e94-a2c6-5af0c4596ecf","Type":"ContainerStarted","Data":"c655d07d6850e72e8c30517e81318508971d2e2b4c7e258de593a1169cb307c7"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.710589 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.710922 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.711087 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.211067089 +0000 UTC m=+155.249327835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.728306 4817 patch_prober.go:28] interesting pod/console-operator-58897d9998-jx7hb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.728357 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" podUID="ee782dd8-7162-4e94-a2c6-5af0c4596ecf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.767249 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6mx7t"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.767353 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" event={"ID":"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f","Type":"ContainerStarted","Data":"71f55e017cb45c5c65ec521a2b20a62fb8a330662a2f774f571d1abe9e964318"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.775723 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zc2cb"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.777408 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" event={"ID":"4da22dbe-ae9a-4df9-a16c-76b248ef22b4","Type":"ContainerStarted","Data":"3580995f86d9dfe6141c388973257a1e9d436554f932c050b2d229d7aa8ff7ac"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.788483 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" event={"ID":"3d625077-3908-4a2f-a87e-5631a1a2a450","Type":"ContainerStarted","Data":"f2115555f97cf112f910e5dbcbe1dff3f6712cbd60bb0ae6fad13789fc55db21"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.806112 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" event={"ID":"0eae806d-94d0-4c92-a008-3853f85933a9","Type":"ContainerStarted","Data":"0db0cc23a071e9325f891c1440bb6d66f3c18f3a8770a564c50f271f35c02154"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.812700 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.815192 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.315167564 +0000 UTC m=+155.353428350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.819982 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.839429 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" event={"ID":"2e334958-5906-4f73-b6ef-c4634ae491b0","Type":"ContainerStarted","Data":"e2cc102b9bf64cbeb7314c457d0cc67fa00c0e063b696805bc71d8c515e610ac"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.842846 4817 generic.go:334] "Generic (PLEG): container finished" podID="4b50e6b0-0310-4c00-9fe4-b7d987811711" containerID="3ffac0361ed30e9c584edc3311067254de6eea69e5b66080b3f0cc60f1a4294f" exitCode=0 Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.843598 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" event={"ID":"4b50e6b0-0310-4c00-9fe4-b7d987811711","Type":"ContainerDied","Data":"3ffac0361ed30e9c584edc3311067254de6eea69e5b66080b3f0cc60f1a4294f"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.851780 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" event={"ID":"71e0f963-52a3-45c7-a104-bc2a081c6e8e","Type":"ContainerStarted","Data":"0cb600e8af5c5e3e70b1b06aa1ab1268afd674d7527c6d613f10137a33cda00d"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.859036 
4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" event={"ID":"bf21823d-0caa-409b-8cf7-47de479e404d","Type":"ContainerStarted","Data":"0eaf1395816ecf0824a0e3852383ff62b57db34ec8c5aa903abc3827504e02e8"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.862945 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9bjwk" event={"ID":"06ed05d5-4daf-4adf-8f01-bdfc8acf2490","Type":"ContainerStarted","Data":"386b6c8d9a663beb09642d026a91c4eafa566d6988f24a3ac04867c5305916a8"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.886241 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" event={"ID":"564dba5a-f688-4ee3-9f4c-799539db7890","Type":"ContainerStarted","Data":"2cac2d8698329fc5d4704a09e773bad241dcbaf2e7e07df9344d7bd7f4be1ef2"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.889782 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" event={"ID":"fbf65941-df6f-4ba2-970a-b1346853d39b","Type":"ContainerStarted","Data":"361a566c32659a4dba433dbe4ee4c40781dd1d14a0868fb7085b9e6eaa31a0a3"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.893303 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.894762 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" event={"ID":"79ef409d-9eab-4e6a-8009-4f06ca780969","Type":"ContainerStarted","Data":"a7ac5375241f5485f692c9a029911f1130933a45a674872bd21e78f7c5b705e0"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.904930 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" event={"ID":"30a068b8-6cf5-4f60-9a4b-89c40b37ad32","Type":"ContainerStarted","Data":"f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.905185 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.910974 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb"] Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.914625 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:00 crc kubenswrapper[4817]: E0314 05:35:00.915327 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.415314575 +0000 UTC m=+155.453575321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.920969 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" event={"ID":"5624b850-5ca9-47d2-82e9-52bbc3829bc5","Type":"ContainerStarted","Data":"f0d4f9ca813ff045ea943578863d16ff2f953332210851e7c78fd41c2cce5b16"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.927478 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" event={"ID":"1555cef9-f674-40c5-8edf-e0a02cda8d4b","Type":"ContainerStarted","Data":"9ef094511a2698d011868712231a2283a0606cbe34a3d2ced92332193094096b"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.935464 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" event={"ID":"db1d9ab3-f304-42b2-9535-e41c50ce108c","Type":"ContainerStarted","Data":"13490ebe183206214b30dd72f9c9a0c0cf9ef14600a94d852421406668770a4f"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.944686 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" event={"ID":"9c53feda-784d-431d-bbd6-528333b58935","Type":"ContainerStarted","Data":"277581cc94d993eeb2b1308ce8e9bf409783746c2a85d50b0f3173a906a6cade"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.949618 4817 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jsk5j 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.949682 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" podUID="30a068b8-6cf5-4f60-9a4b-89c40b37ad32" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.955505 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" event={"ID":"c6b916de-657d-4285-80a7-e22fe89dd6f8","Type":"ContainerStarted","Data":"8fa3246e9b10b4468fe4f5d5a00f596bae7516c488ec5ff14b652c6f8e2b1509"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.965277 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" event={"ID":"cdb37610-64a2-43d9-98b1-513a60b6de4d","Type":"ContainerStarted","Data":"39bee6d4b7f6587f967af9c30c936d1633a598a19707fc35a4c7d5fe81bd64d2"} Mar 14 05:35:00 crc kubenswrapper[4817]: W0314 05:35:00.965402 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf686c00d_9b3f_4ad0_a44a_2a27218f9d3c.slice/crio-d2b0991d0ddeb613dd59924f00d4c579f41205244a96972cf2d71c210223c154 WatchSource:0}: Error finding container d2b0991d0ddeb613dd59924f00d4c579f41205244a96972cf2d71c210223c154: Status 404 returned error can't find the container with id d2b0991d0ddeb613dd59924f00d4c579f41205244a96972cf2d71c210223c154 Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.974099 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" event={"ID":"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a","Type":"ContainerStarted","Data":"2c7ce9881b260dc51223ace6dd2ad0a515a9e39cc96127b58de6c86a3aed4309"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.976327 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" event={"ID":"83bda382-2788-43b8-b149-9e8319aaa2c9","Type":"ContainerStarted","Data":"9bf580fadd8860da3c2d60ffb4299f081e855e82d10d81209263600ca0cf3f25"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.977403 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" event={"ID":"5f8f4a28-ea66-4a2a-8cc8-ad845efd3266","Type":"ContainerStarted","Data":"c0984e652c512257fb8d8cac8aff73e628129ac96613fa466322065e4920a25e"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.978838 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" event={"ID":"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd","Type":"ContainerStarted","Data":"097dbdd625ca9911e47f95646fd967a2cc37619261c66f8f37f3ce277a5d6462"} Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.991734 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:35:00 crc kubenswrapper[4817]: I0314 05:35:00.999074 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 05:35:00 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld Mar 14 05:35:00 crc kubenswrapper[4817]: [+]process-running ok Mar 14 05:35:00 crc kubenswrapper[4817]: healthz check failed Mar 14 05:35:00 crc 
kubenswrapper[4817]: I0314 05:35:00.999266 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.024111 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.025354 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.525336182 +0000 UTC m=+155.563596918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: W0314 05:35:01.031333 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0858db7_f7bb_4ef4_a46e_09dae35d6030.slice/crio-19f31e282939358a753fd8f080ca14266d1acaf0f8dc96cd10c02d0f31fa8296 WatchSource:0}: Error finding container 19f31e282939358a753fd8f080ca14266d1acaf0f8dc96cd10c02d0f31fa8296: Status 404 returned error can't find the container with id 19f31e282939358a753fd8f080ca14266d1acaf0f8dc96cd10c02d0f31fa8296
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.126464 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.126879 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.626862673 +0000 UTC m=+155.665123419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.200208 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-f772q" podStartSLOduration=91.200193817 podStartE2EDuration="1m31.200193817s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:01.19824632 +0000 UTC m=+155.236507076" watchObservedRunningTime="2026-03-14 05:35:01.200193817 +0000 UTC m=+155.238454563"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.227590 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.227716 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.727698213 +0000 UTC m=+155.765958959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.228088 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.228351 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.728343332 +0000 UTC m=+155.766604078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.240035 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-z5ltz" podStartSLOduration=91.239968389 podStartE2EDuration="1m31.239968389s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:01.237075485 +0000 UTC m=+155.275336231" watchObservedRunningTime="2026-03-14 05:35:01.239968389 +0000 UTC m=+155.278229135"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.279479 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4r6qx" podStartSLOduration=91.279461762 podStartE2EDuration="1m31.279461762s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:01.277991499 +0000 UTC m=+155.316252245" watchObservedRunningTime="2026-03-14 05:35:01.279461762 +0000 UTC m=+155.317722508"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.320983 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" podStartSLOduration=91.320964174 podStartE2EDuration="1m31.320964174s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:01.31599879 +0000 UTC m=+155.354259536" watchObservedRunningTime="2026-03-14 05:35:01.320964174 +0000 UTC m=+155.359224920"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.329297 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.329616 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.829585654 +0000 UTC m=+155.867846400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.329987 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.330330 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.830322055 +0000 UTC m=+155.868582801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.430969 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.431081 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.931057763 +0000 UTC m=+155.969318509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.431146 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.431424 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:01.931416453 +0000 UTC m=+155.969677189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.443363 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gcsxl" podStartSLOduration=91.443343999 podStartE2EDuration="1m31.443343999s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:01.401031283 +0000 UTC m=+155.439292029" watchObservedRunningTime="2026-03-14 05:35:01.443343999 +0000 UTC m=+155.481604745"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.443501 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-r82p7" podStartSLOduration=91.443497053 podStartE2EDuration="1m31.443497053s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:01.437410307 +0000 UTC m=+155.475671053" watchObservedRunningTime="2026-03-14 05:35:01.443497053 +0000 UTC m=+155.481757799"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.482044 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" podStartSLOduration=91.482021989 podStartE2EDuration="1m31.482021989s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:01.480814204 +0000 UTC m=+155.519074950" watchObservedRunningTime="2026-03-14 05:35:01.482021989 +0000 UTC m=+155.520282735"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.521947 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-76kfl" podStartSLOduration=91.521930485 podStartE2EDuration="1m31.521930485s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:01.519569167 +0000 UTC m=+155.557829913" watchObservedRunningTime="2026-03-14 05:35:01.521930485 +0000 UTC m=+155.560191231"
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.532395 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.532552 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.032527842 +0000 UTC m=+156.070788598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.532665 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.532956 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.032945154 +0000 UTC m=+156.071205910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.634958 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.635081 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.135055852 +0000 UTC m=+156.173316588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.635685 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.636094 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.136086032 +0000 UTC m=+156.174346778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.738009 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.738294 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.238259051 +0000 UTC m=+156.276519797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.738572 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.738995 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.238986262 +0000 UTC m=+156.277247008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.749219 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.840010 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.840199 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.340168053 +0000 UTC m=+156.378428799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.840324 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.840818 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.340798331 +0000 UTC m=+156.379059077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.941859 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.942073 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.442045844 +0000 UTC m=+156.480306590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.942220 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:01 crc kubenswrapper[4817]: E0314 05:35:01.942575 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.442567449 +0000 UTC m=+156.480828195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.990002 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" event={"ID":"aa1047b1-fd57-45c9-b262-9f087572e514","Type":"ContainerStarted","Data":"9f07c1a1ca2bb1389a5cbf257e42439aa024b03939b0d34becb32d1e09d5db7c"}
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.994517 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" event={"ID":"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44","Type":"ContainerStarted","Data":"490335cf3f407efb64f609589f3ea8c255962e020420b78d597e023741d1bfb0"}
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.996071 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:01 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld
Mar 14 05:35:01 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:01 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:01 crc kubenswrapper[4817]: I0314 05:35:01.996152 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.000206 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" event={"ID":"3d625077-3908-4a2f-a87e-5631a1a2a450","Type":"ContainerStarted","Data":"63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44"}
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.000604 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w"
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.002427 4817 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-86l7w container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.002492 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" podUID="3d625077-3908-4a2f-a87e-5631a1a2a450" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.006769 4817 generic.go:334] "Generic (PLEG): container finished" podID="4da22dbe-ae9a-4df9-a16c-76b248ef22b4" containerID="9ac33195767381a8998d8d8dca7bcd01aa245dd6658abdf79078323ae6bf2e8f" exitCode=0
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.006915 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" event={"ID":"4da22dbe-ae9a-4df9-a16c-76b248ef22b4","Type":"ContainerDied","Data":"9ac33195767381a8998d8d8dca7bcd01aa245dd6658abdf79078323ae6bf2e8f"}
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.011585 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" event={"ID":"2e1f15d1-2628-4fa2-b571-b39051f128d2","Type":"ContainerStarted","Data":"56bd12edace3db509e7ff6362e08abff2dc1acfffcf69f1faaa9a7c9a3a6064c"}
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.011773 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf"
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.013575 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" event={"ID":"1661ef8f-2020-4c65-8228-434786300314","Type":"ContainerStarted","Data":"c4cf3343625bdb7b6bb9a5ab5da0a699f176c7d152231b6d9da75abc38b45053"}
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.013771 4817 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lg8lf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body=
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.013814 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" podUID="2e1f15d1-2628-4fa2-b571-b39051f128d2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused"
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.015134 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" event={"ID":"5d848b2e-47e2-431b-aff4-db85a17027ac","Type":"ContainerStarted","Data":"3c128d84996f3c4b7f1d0822e9e233bbc7094d25819bec823f4473ff4f9e1648"}
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.019876 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" podStartSLOduration=92.019844248 podStartE2EDuration="1m32.019844248s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:02.017712386 +0000 UTC m=+156.055973162" watchObservedRunningTime="2026-03-14 05:35:02.019844248 +0000 UTC m=+156.058104994"
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.020779 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" event={"ID":"2e334958-5906-4f73-b6ef-c4634ae491b0","Type":"ContainerStarted","Data":"4f6254cb64a4e2bdf2ae75d13a699108dc49ac5fe8f4d1d0ba8e5aa70693f7cb"}
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.025247 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" event={"ID":"536efb61-e25f-4be4-88b8-e2a7f7e0df84","Type":"ContainerStarted","Data":"78ecbc047e591bccab62922e699aab0eb447a141988c3958a45a36009b3cd00d"}
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.032691 4817 generic.go:334] "Generic (PLEG): container finished" podID="4fc456eb-fbb1-42c7-88fa-f7a3e13a397a" containerID="bc6dd632114508a34d0c08ddcda67f7ce9a5fe491a83f9b19bca9b4933c148a6" exitCode=0
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.032917 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" event={"ID":"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a","Type":"ContainerDied","Data":"bc6dd632114508a34d0c08ddcda67f7ce9a5fe491a83f9b19bca9b4933c148a6"}
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.039932 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.039882928 podStartE2EDuration="1.039882928s" podCreationTimestamp="2026-03-14 05:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:02.034548054 +0000 UTC m=+156.072808840" watchObservedRunningTime="2026-03-14 05:35:02.039882928 +0000 UTC m=+156.078143664"
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.045821 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.046031 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.546002405 +0000 UTC m=+156.584263151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.046182 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.046618 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.546606993 +0000 UTC m=+156.584867839 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.053339 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" event={"ID":"41cbacfd-0181-45a3-86c2-8a51f672a37d","Type":"ContainerStarted","Data":"1e919a3b84e0d702feb9b0dc0b87cab53afcf494eb5f59827f2ff52811482eba"} Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.058422 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" event={"ID":"e0858db7-f7bb-4ef4-a46e-09dae35d6030","Type":"ContainerStarted","Data":"19f31e282939358a753fd8f080ca14266d1acaf0f8dc96cd10c02d0f31fa8296"} Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.063612 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zc2cb" event={"ID":"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c","Type":"ContainerStarted","Data":"d2b0991d0ddeb613dd59924f00d4c579f41205244a96972cf2d71c210223c154"} Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.065395 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" event={"ID":"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d","Type":"ContainerStarted","Data":"427ef21d83d7a3d1716475be6686dec2c9c4342e15bc5801b9174c0cff9d4d35"} Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.066687 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" event={"ID":"a3740912-6b07-47f2-80f2-7f62e67cbef2","Type":"ContainerStarted","Data":"b79078da9875d9f2dfb5089b79b68e3c834613a280df51cf5668a776f23ffec1"} Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.069341 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" event={"ID":"79ef409d-9eab-4e6a-8009-4f06ca780969","Type":"ContainerStarted","Data":"7db2f75956c1fc33ad8e2c781e3f78b321b4830295c2f2c39d42cbcd911ff2a5"} Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.071498 4817 patch_prober.go:28] interesting pod/console-operator-58897d9998-jx7hb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.071542 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" podUID="ee782dd8-7162-4e94-a2c6-5af0c4596ecf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.073215 4817 patch_prober.go:28] interesting pod/downloads-7954f5f757-gcsxl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.073251 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gcsxl" podUID="3b721585-56d3-4382-b93d-c70296e6d223" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: 
connection refused" Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.079355 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.099065 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" podStartSLOduration=92.099043092 podStartE2EDuration="1m32.099043092s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:02.078089405 +0000 UTC m=+156.116350161" watchObservedRunningTime="2026-03-14 05:35:02.099043092 +0000 UTC m=+156.137303838" Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.100993 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q9gdj" podStartSLOduration=92.100967087 podStartE2EDuration="1m32.100967087s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:02.096169538 +0000 UTC m=+156.134430284" watchObservedRunningTime="2026-03-14 05:35:02.100967087 +0000 UTC m=+156.139227843" Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.167741 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.169993 4817 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.669971116 +0000 UTC m=+156.708231862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.270976 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.271383 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.771363853 +0000 UTC m=+156.809624599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.372635 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.372783 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.87275924 +0000 UTC m=+156.911019986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.373257 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.374501 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.874454489 +0000 UTC m=+156.912715275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.474514 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.474845 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.974792316 +0000 UTC m=+157.013053102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.475037 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.476069 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:02.976047432 +0000 UTC m=+157.014308218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.577236 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.577480 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.077438519 +0000 UTC m=+157.115699265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.578032 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.578545 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.07852412 +0000 UTC m=+157.116784876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.679272 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.679479 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.179430813 +0000 UTC m=+157.217691559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.679834 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.680242 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.180210586 +0000 UTC m=+157.218471442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.781434 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.781604 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.281569232 +0000 UTC m=+157.319829978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.781769 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.782587 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.28254504 +0000 UTC m=+157.320805836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.883352 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.883811 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.383759952 +0000 UTC m=+157.422020708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.884132 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.885155 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.385142662 +0000 UTC m=+157.423403428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.986290 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.986589 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.486557959 +0000 UTC m=+157.524818705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:02 crc kubenswrapper[4817]: I0314 05:35:02.987177 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:02 crc kubenswrapper[4817]: E0314 05:35:02.987680 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.487663591 +0000 UTC m=+157.525924337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.011003 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 05:35:03 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld Mar 14 05:35:03 crc kubenswrapper[4817]: [+]process-running ok Mar 14 05:35:03 crc kubenswrapper[4817]: healthz check failed Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.011078 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.076833 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" event={"ID":"5d848b2e-47e2-431b-aff4-db85a17027ac","Type":"ContainerStarted","Data":"e25ee72532b85acb6ceec72315381a39e5ac735512bbf9ee5b5a3cc5698dfb4b"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.087136 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" 
event={"ID":"536efb61-e25f-4be4-88b8-e2a7f7e0df84","Type":"ContainerStarted","Data":"dc65192577e2e5eb78db2338e6291faf45a505b5641ad00a2d971b9e2633ad0e"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.089156 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:03 crc kubenswrapper[4817]: E0314 05:35:03.089722 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.589705227 +0000 UTC m=+157.627965973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.134756 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" event={"ID":"fbf65941-df6f-4ba2-970a-b1346853d39b","Type":"ContainerStarted","Data":"b7cc469ca0b9701cbd74f8bde0f57581f05e9106e784ad630bd31c4035c77528"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.136189 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ktxdp" podStartSLOduration=93.136175343 
podStartE2EDuration="1m33.136175343s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.13538767 +0000 UTC m=+157.173648416" watchObservedRunningTime="2026-03-14 05:35:03.136175343 +0000 UTC m=+157.174436089" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.148150 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" event={"ID":"5f8f4a28-ea66-4a2a-8cc8-ad845efd3266","Type":"ContainerStarted","Data":"cb7e0713c9f0d8437f9ad72be67dead590ace244be1b786b55084c31fb31d02e"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.153218 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vftwz" podStartSLOduration=93.153192636 podStartE2EDuration="1m33.153192636s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.112668812 +0000 UTC m=+157.150929558" watchObservedRunningTime="2026-03-14 05:35:03.153192636 +0000 UTC m=+157.191453402" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.168305 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" event={"ID":"1555cef9-f674-40c5-8edf-e0a02cda8d4b","Type":"ContainerStarted","Data":"39b49d75875f6a5b8a2db5c0027d1872e5c9f153703681b33dbcf2183fa1f2ba"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.189704 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" 
event={"ID":"4b50e6b0-0310-4c00-9fe4-b7d987811711","Type":"ContainerStarted","Data":"175923341ea8215bab674d3818e6ed299a6393d20e3465faba4320dbb6f5c17c"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.190232 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mgpqw" podStartSLOduration=93.190217108 podStartE2EDuration="1m33.190217108s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.188299413 +0000 UTC m=+157.226560159" watchObservedRunningTime="2026-03-14 05:35:03.190217108 +0000 UTC m=+157.228477854" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.190765 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:03 crc kubenswrapper[4817]: E0314 05:35:03.192640 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.692623368 +0000 UTC m=+157.730884104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.224821 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mm5dj" podStartSLOduration=93.22480661 podStartE2EDuration="1m33.22480661s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.223248445 +0000 UTC m=+157.261509191" watchObservedRunningTime="2026-03-14 05:35:03.22480661 +0000 UTC m=+157.263067356" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.269796 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" event={"ID":"79ef409d-9eab-4e6a-8009-4f06ca780969","Type":"ContainerStarted","Data":"272c7e7f40e90d52635287f90c8df53191568756ed63f8804c807b97b2f7cc1c"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.284235 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" event={"ID":"a7458d73-4b83-46dc-9be4-6c14e1b9bcd7","Type":"ContainerStarted","Data":"fc6b7cd88293934ca58ba8e2d929946454cb4ee7f79a95104196d7d5fd23d365"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.285607 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:35:03 crc kubenswrapper[4817]: 
I0314 05:35:03.288032 4817 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f6b68 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.288095 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" podUID="a7458d73-4b83-46dc-9be4-6c14e1b9bcd7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.291779 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:03 crc kubenswrapper[4817]: E0314 05:35:03.293624 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.793577222 +0000 UTC m=+157.831837968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.319429 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" event={"ID":"5624b850-5ca9-47d2-82e9-52bbc3829bc5","Type":"ContainerStarted","Data":"41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.319967 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.322928 4817 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5cqlk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.322999 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" podUID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.345490 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" 
event={"ID":"aa1047b1-fd57-45c9-b262-9f087572e514","Type":"ContainerStarted","Data":"8de118216b217e077df1a5dcefa049150540cfd081636368ff3752a932fac9c6"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.363344 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9bjwk" event={"ID":"06ed05d5-4daf-4adf-8f01-bdfc8acf2490","Type":"ContainerStarted","Data":"31bd322df4d79ac750bec9c686694d07eb8ddf8f7b67a9d53d68d0a58c34ab37"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.380762 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" event={"ID":"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4","Type":"ContainerStarted","Data":"660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.381810 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.387819 4817 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5mbx5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body= Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.387871 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" podUID="9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.395188 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:03 crc kubenswrapper[4817]: E0314 05:35:03.397198 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.897184133 +0000 UTC m=+157.935444879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.403705 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" event={"ID":"564dba5a-f688-4ee3-9f4c-799539db7890","Type":"ContainerStarted","Data":"5dd1fccc23e32d21e8d339b51b5aec77070f3e37342301b39d74414794d88755"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.404189 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.406776 4817 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gmsz9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Mar 14 
05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.406902 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" podUID="564dba5a-f688-4ee3-9f4c-799539db7890" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.414846 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xnhq6" podStartSLOduration=93.414824264 podStartE2EDuration="1m33.414824264s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.401122087 +0000 UTC m=+157.439382833" watchObservedRunningTime="2026-03-14 05:35:03.414824264 +0000 UTC m=+157.453085010" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.415701 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" podStartSLOduration=93.415694379 podStartE2EDuration="1m33.415694379s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.25377963 +0000 UTC m=+157.292040376" watchObservedRunningTime="2026-03-14 05:35:03.415694379 +0000 UTC m=+157.453955125" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.448362 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" event={"ID":"4da22dbe-ae9a-4df9-a16c-76b248ef22b4","Type":"ContainerStarted","Data":"ab4015b8074691fea813542c49b900d13fb6228d5b88f3bf6aa2fb8d2f857621"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.448539 4817 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.458984 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" event={"ID":"2e334958-5906-4f73-b6ef-c4634ae491b0","Type":"ContainerStarted","Data":"548f7e6b782c11b4b721e2dcd0f6de9cae1ab0e29b341bdc215a520de651dca2"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.478353 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.478386 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.489097 4817 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-rpr5b container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.489363 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" podUID="4b50e6b0-0310-4c00-9fe4-b7d987811711" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.493850 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" event={"ID":"2c785e61-52c7-494e-8cf9-a6d3bf4bae9f","Type":"ContainerStarted","Data":"54002d4d0d2abf672652a4ed3c614346ac27950ab7effffa472104e16c9c4270"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 
05:35:03.496683 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:03 crc kubenswrapper[4817]: E0314 05:35:03.499158 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:03.999125406 +0000 UTC m=+158.037386162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.517804 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jnxpm" event={"ID":"ac384b90-5e6b-4477-b71a-8a8a56a29896","Type":"ContainerStarted","Data":"8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.519408 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zc2cb" event={"ID":"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c","Type":"ContainerStarted","Data":"41142e386fdf7771afe2df27faca711f1e4899a1276eb385ea9cc44da707997a"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.519773 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zc2cb" Mar 14 
05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.548049 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9bjwk" podStartSLOduration=7.548026483 podStartE2EDuration="7.548026483s" podCreationTimestamp="2026-03-14 05:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.488838998 +0000 UTC m=+157.527099744" watchObservedRunningTime="2026-03-14 05:35:03.548026483 +0000 UTC m=+157.586287229" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.557874 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" event={"ID":"41cbacfd-0181-45a3-86c2-8a51f672a37d","Type":"ContainerStarted","Data":"17ac00d476f03ab47190bb1c7b6b34db6d21293b8af48cb469f4fa6aefe6c681"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.574911 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" event={"ID":"6bbd1345-183f-44c1-ba4c-60d4bc2d34dd","Type":"ContainerStarted","Data":"b0952a6e6d88c87343a76bfb6defdf7ba3c37d3d024f83cfee25f6c333e517a0"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.592564 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" podStartSLOduration=93.592547612 podStartE2EDuration="1m33.592547612s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.549430363 +0000 UTC m=+157.587691109" watchObservedRunningTime="2026-03-14 05:35:03.592547612 +0000 UTC m=+157.630808358" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.594016 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-x4mwq" podStartSLOduration=93.594007454 podStartE2EDuration="1m33.594007454s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.592261524 +0000 UTC m=+157.630522280" watchObservedRunningTime="2026-03-14 05:35:03.594007454 +0000 UTC m=+157.632268200" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.599880 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"] Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.599933 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" event={"ID":"7767bcc0-c4e6-4e9e-ba1f-5286b7263f44","Type":"ContainerStarted","Data":"2e7c6d96152c4a7612e4fa220289281684d11b340864bb92e6d4cd3910c1a111"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.600176 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.601846 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" event={"ID":"61083608-8b3b-4c98-a236-0d3f1d26d3b5","Type":"ContainerStarted","Data":"7554006e080583e32060587d546219a0e5bf7749cbe62b7302dfaa38cf4d0685"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.603189 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86l7w"] Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.611858 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" 
event={"ID":"71e0f963-52a3-45c7-a104-bc2a081c6e8e","Type":"ContainerStarted","Data":"f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.612648 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.604079 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:03 crc kubenswrapper[4817]: E0314 05:35:03.604291 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:04.104280242 +0000 UTC m=+158.142540988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.617647 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-twv27" event={"ID":"77e31af0-1176-4696-8d10-2a8425e75077","Type":"ContainerStarted","Data":"654d5a3402eabd9d1de97083e302d61a416e66469b361f40d2e092b20cb03b1d"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.625835 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" event={"ID":"cdb37610-64a2-43d9-98b1-513a60b6de4d","Type":"ContainerStarted","Data":"5a8f548a937ceb9117b83d0d4cf8205c0bbaedacf7c23ae3253e1ef339153343"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.630876 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" event={"ID":"e0858db7-f7bb-4ef4-a46e-09dae35d6030","Type":"ContainerStarted","Data":"8e06efba137478a39e187393b98c67e0e28007f85d89b2952e314af3fac2035a"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.632837 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" event={"ID":"db1d9ab3-f304-42b2-9535-e41c50ce108c","Type":"ContainerStarted","Data":"18c8f70d6fe61eb70784d6df9f0f85710518bd8d3cb5bac1dadc50088200c877"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.641605 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" podStartSLOduration=93.641587383 podStartE2EDuration="1m33.641587383s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.639524683 +0000 UTC m=+157.677785429" watchObservedRunningTime="2026-03-14 05:35:03.641587383 +0000 UTC m=+157.679848129" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.644133 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" event={"ID":"0eae806d-94d0-4c92-a008-3853f85933a9","Type":"ContainerStarted","Data":"3589a93e43bce0a3b65dd940cda831ed7c685bcb9eed8662364287c5c1bf0b4c"} Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.644325 4817 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lg8lf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body= Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.644353 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" podUID="2e1f15d1-2628-4fa2-b571-b39051f128d2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.644496 4817 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-86l7w container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.644536 4817 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" podUID="3d625077-3908-4a2f-a87e-5631a1a2a450" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.648274 4817 patch_prober.go:28] interesting pod/console-operator-58897d9998-jx7hb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.648335 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" podUID="ee782dd8-7162-4e94-a2c6-5af0c4596ecf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.666582 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.714382 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:03 crc kubenswrapper[4817]: E0314 05:35:03.715862 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 05:35:04.215845594 +0000 UTC m=+158.254106340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.724801 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" podStartSLOduration=93.724780832 podStartE2EDuration="1m33.724780832s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.682492217 +0000 UTC m=+157.720752963" watchObservedRunningTime="2026-03-14 05:35:03.724780832 +0000 UTC m=+157.763041578" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.726000 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" podStartSLOduration=93.725992057 podStartE2EDuration="1m33.725992057s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.724498104 +0000 UTC m=+157.762758850" watchObservedRunningTime="2026-03-14 05:35:03.725992057 +0000 UTC m=+157.764252803" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.757192 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc" podStartSLOduration=93.757174791 
podStartE2EDuration="1m33.757174791s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.753618078 +0000 UTC m=+157.791878824" watchObservedRunningTime="2026-03-14 05:35:03.757174791 +0000 UTC m=+157.795435537" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.793073 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" podStartSLOduration=93.79305563 podStartE2EDuration="1m33.79305563s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.791550706 +0000 UTC m=+157.829811472" watchObservedRunningTime="2026-03-14 05:35:03.79305563 +0000 UTC m=+157.831316376" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.814679 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-twv27" podStartSLOduration=7.814663226 podStartE2EDuration="7.814663226s" podCreationTimestamp="2026-03-14 05:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.814639795 +0000 UTC m=+157.852900541" watchObservedRunningTime="2026-03-14 05:35:03.814663226 +0000 UTC m=+157.852923962" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.816724 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:03 
crc kubenswrapper[4817]: E0314 05:35:03.818931 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:04.318915679 +0000 UTC m=+158.357176425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.846040 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" podStartSLOduration=7.846026324 podStartE2EDuration="7.846026324s" podCreationTimestamp="2026-03-14 05:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.844432018 +0000 UTC m=+157.882692764" watchObservedRunningTime="2026-03-14 05:35:03.846026324 +0000 UTC m=+157.884287070" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.909393 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" podStartSLOduration=93.909377669 podStartE2EDuration="1m33.909377669s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.908690849 +0000 UTC m=+157.946951585" watchObservedRunningTime="2026-03-14 05:35:03.909377669 +0000 UTC m=+157.947638415" Mar 14 05:35:03 crc 
kubenswrapper[4817]: I0314 05:35:03.909656 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zc2cb" podStartSLOduration=7.9096513470000005 podStartE2EDuration="7.909651347s" podCreationTimestamp="2026-03-14 05:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.886289291 +0000 UTC m=+157.924550037" watchObservedRunningTime="2026-03-14 05:35:03.909651347 +0000 UTC m=+157.947912093" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.918396 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:03 crc kubenswrapper[4817]: E0314 05:35:03.918755 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:04.41873073 +0000 UTC m=+158.456991476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.953023 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lvcvt" podStartSLOduration=93.953004653 podStartE2EDuration="1m33.953004653s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.95048256 +0000 UTC m=+157.988743306" watchObservedRunningTime="2026-03-14 05:35:03.953004653 +0000 UTC m=+157.991265399" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.979273 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tgp8z" podStartSLOduration=93.979257623 podStartE2EDuration="1m33.979257623s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:03.978269255 +0000 UTC m=+158.016530001" watchObservedRunningTime="2026-03-14 05:35:03.979257623 +0000 UTC m=+158.017518369" Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.997240 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 14 05:35:03 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld Mar 14 05:35:03 crc kubenswrapper[4817]: [+]process-running ok Mar 14 05:35:03 crc kubenswrapper[4817]: healthz check failed Mar 14 05:35:03 crc kubenswrapper[4817]: I0314 05:35:03.997292 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.016073 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5rmwm" podStartSLOduration=94.016049699 podStartE2EDuration="1m34.016049699s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.011840577 +0000 UTC m=+158.050101323" watchObservedRunningTime="2026-03-14 05:35:04.016049699 +0000 UTC m=+158.054310445" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.019687 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.020059 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:04.520045105 +0000 UTC m=+158.558305851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.121494 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.121849 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:04.621834073 +0000 UTC m=+158.660094819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.125450 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" podStartSLOduration=94.125425377 podStartE2EDuration="1m34.125425377s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.090878057 +0000 UTC m=+158.129138813" watchObservedRunningTime="2026-03-14 05:35:04.125425377 +0000 UTC m=+158.163686123" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.126749 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" podStartSLOduration=94.126742725 podStartE2EDuration="1m34.126742725s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.123206993 +0000 UTC m=+158.161467739" watchObservedRunningTime="2026-03-14 05:35:04.126742725 +0000 UTC m=+158.165003471" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.156194 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" podStartSLOduration=94.156150217 podStartE2EDuration="1m34.156150217s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.143426619 +0000 UTC m=+158.181687365" watchObservedRunningTime="2026-03-14 05:35:04.156150217 +0000 UTC m=+158.194410963" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.180578 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4khbc" podStartSLOduration=94.180560874 podStartE2EDuration="1m34.180560874s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.166384564 +0000 UTC m=+158.204645310" watchObservedRunningTime="2026-03-14 05:35:04.180560874 +0000 UTC m=+158.218821620" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.198756 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jnxpm" podStartSLOduration=94.198739041 podStartE2EDuration="1m34.198739041s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.196663871 +0000 UTC m=+158.234924637" watchObservedRunningTime="2026-03-14 05:35:04.198739041 +0000 UTC m=+158.236999787" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.225427 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-v5nhk" podStartSLOduration=94.225406303 podStartE2EDuration="1m34.225406303s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.222226431 +0000 UTC m=+158.260487177" watchObservedRunningTime="2026-03-14 
05:35:04.225406303 +0000 UTC m=+158.263667049" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.226859 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.227197 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:04.727186195 +0000 UTC m=+158.765446941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.330695 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.330919 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 05:35:04.830874798 +0000 UTC m=+158.869135544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.331015 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.331533 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:04.831510507 +0000 UTC m=+158.869771243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.432013 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.432543 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:04.932517162 +0000 UTC m=+158.970777908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.536599 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.537163 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.037145433 +0000 UTC m=+159.075406179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.555739 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pk9ws"] Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.637703 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.638004 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.137961703 +0000 UTC m=+159.176222449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.638085 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.638810 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.138803118 +0000 UTC m=+159.177063864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.663629 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8hzr5" event={"ID":"61083608-8b3b-4c98-a236-0d3f1d26d3b5","Type":"ContainerStarted","Data":"7f55b61bfbd4493721191a0924afe1a5e964a7ef7aeef95150f661717ac423ad"} Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.683206 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" event={"ID":"9f2c60cc-433e-4f50-8eca-2fb0ddd7982d","Type":"ContainerStarted","Data":"e562d88f7f09ace77adfa5cd04ecac10783fef52750534f7ee86aa11cc774457"} Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.698815 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bq8dk" event={"ID":"db1d9ab3-f304-42b2-9535-e41c50ce108c","Type":"ContainerStarted","Data":"6c5070548ef8f847079f3f010ce6467eea343137ca2f96750433e196fbff9957"} Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.716246 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" event={"ID":"1661ef8f-2020-4c65-8228-434786300314","Type":"ContainerStarted","Data":"922d15fd3d870eaa4a84f065d2f79f3bd1555f5b42931d15f460b75368b4c97f"} Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.722808 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ssxwb" 
event={"ID":"41cbacfd-0181-45a3-86c2-8a51f672a37d","Type":"ContainerStarted","Data":"0eea5a0f6c9e47a53c98fc0db7bbb19a30da9af4f014e1e2badb4b29ccec0926"} Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.741493 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.742126 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.24209224 +0000 UTC m=+159.280353006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.749339 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" event={"ID":"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a","Type":"ContainerStarted","Data":"6d1ff6bddc1701f31c7fd0a78d5f33f6b3e0d06b5d7c5d1994ef81c14748c0f8"} Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.754973 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7zj7n" podStartSLOduration=94.754957762 podStartE2EDuration="1m34.754957762s" 
podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.741540894 +0000 UTC m=+158.779801650" watchObservedRunningTime="2026-03-14 05:35:04.754957762 +0000 UTC m=+158.793218508" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.759391 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" event={"ID":"fbf65941-df6f-4ba2-970a-b1346853d39b","Type":"ContainerStarted","Data":"91a88a18f688b489fc79eb683dd5228482396fc2fb6027f3aad701ff7fc5d6f1"} Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.774491 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" podUID="3d625077-3908-4a2f-a87e-5631a1a2a450" containerName="controller-manager" containerID="cri-o://63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44" gracePeriod=30 Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.775537 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zc2cb" event={"ID":"f686c00d-9b3f-4ad0-a44a-2a27218f9d3c","Type":"ContainerStarted","Data":"433248a4e546afdf6671f0b325d2dfd6399e482b7260bea29336f8d489ca54a2"} Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.777186 4817 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5cqlk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.777295 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" podUID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.777982 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" podUID="30a068b8-6cf5-4f60-9a4b-89c40b37ad32" containerName="route-controller-manager" containerID="cri-o://f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019" gracePeriod=30 Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.787869 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.794369 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gmsz9" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.800012 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f6b68" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.830572 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4ztcp" podStartSLOduration=94.830547612 podStartE2EDuration="1m34.830547612s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:04.827399701 +0000 UTC m=+158.865660477" watchObservedRunningTime="2026-03-14 05:35:04.830547612 +0000 UTC m=+158.868808358" Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.847833 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.855787 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.355768431 +0000 UTC m=+159.394029167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.951232 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:04 crc kubenswrapper[4817]: E0314 05:35:04.952074 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.45205146 +0000 UTC m=+159.490312206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:04 crc kubenswrapper[4817]: I0314 05:35:04.995368 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58520: no serving certificate available for the kubelet"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.000478 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:05 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld
Mar 14 05:35:05 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:05 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.000537 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.054137 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.054577 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.55456534 +0000 UTC m=+159.592826086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.157257 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.157614 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.657597964 +0000 UTC m=+159.695858700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.161098 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58534: no serving certificate available for the kubelet"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.245059 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58542: no serving certificate available for the kubelet"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.260212 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.265267 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.765250542 +0000 UTC m=+159.803511288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.366604 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.367821 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.867805803 +0000 UTC m=+159.906066549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.369198 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58556: no serving certificate available for the kubelet"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.470343 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.470936 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:05.97091795 +0000 UTC m=+160.009178696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.475802 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.484595 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.496835 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58560: no serving certificate available for the kubelet"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.526940 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"]
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.527225 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a068b8-6cf5-4f60-9a4b-89c40b37ad32" containerName="route-controller-manager"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.527246 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a068b8-6cf5-4f60-9a4b-89c40b37ad32" containerName="route-controller-manager"
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.527261 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d625077-3908-4a2f-a87e-5631a1a2a450" containerName="controller-manager"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.527268 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d625077-3908-4a2f-a87e-5631a1a2a450" containerName="controller-manager"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.527398 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d625077-3908-4a2f-a87e-5631a1a2a450" containerName="controller-manager"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.527411 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a068b8-6cf5-4f60-9a4b-89c40b37ad32" containerName="route-controller-manager"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.527938 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.554386 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"]
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572437 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtvsd\" (UniqueName: \"kubernetes.io/projected/3d625077-3908-4a2f-a87e-5631a1a2a450-kube-api-access-mtvsd\") pod \"3d625077-3908-4a2f-a87e-5631a1a2a450\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572490 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-serving-cert\") pod \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572521 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-proxy-ca-bundles\") pod \"3d625077-3908-4a2f-a87e-5631a1a2a450\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572556 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-config\") pod \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572585 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-config\") pod \"3d625077-3908-4a2f-a87e-5631a1a2a450\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572610 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-client-ca\") pod \"3d625077-3908-4a2f-a87e-5631a1a2a450\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572664 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvq4\" (UniqueName: \"kubernetes.io/projected/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-kube-api-access-6pvq4\") pod \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572690 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-client-ca\") pod \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\" (UID: \"30a068b8-6cf5-4f60-9a4b-89c40b37ad32\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572774 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d625077-3908-4a2f-a87e-5631a1a2a450-serving-cert\") pod \"3d625077-3908-4a2f-a87e-5631a1a2a450\" (UID: \"3d625077-3908-4a2f-a87e-5631a1a2a450\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.572948 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.573150 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.07312965 +0000 UTC m=+160.111390396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.573735 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-config\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.574634 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d625077-3908-4a2f-a87e-5631a1a2a450" (UID: "3d625077-3908-4a2f-a87e-5631a1a2a450"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.575073 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-config" (OuterVolumeSpecName: "config") pod "30a068b8-6cf5-4f60-9a4b-89c40b37ad32" (UID: "30a068b8-6cf5-4f60-9a4b-89c40b37ad32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.575099 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-config" (OuterVolumeSpecName: "config") pod "3d625077-3908-4a2f-a87e-5631a1a2a450" (UID: "3d625077-3908-4a2f-a87e-5631a1a2a450"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.575386 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3d625077-3908-4a2f-a87e-5631a1a2a450" (UID: "3d625077-3908-4a2f-a87e-5631a1a2a450"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.575603 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-client-ca" (OuterVolumeSpecName: "client-ca") pod "30a068b8-6cf5-4f60-9a4b-89c40b37ad32" (UID: "30a068b8-6cf5-4f60-9a4b-89c40b37ad32"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.583487 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "30a068b8-6cf5-4f60-9a4b-89c40b37ad32" (UID: "30a068b8-6cf5-4f60-9a4b-89c40b37ad32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.591426 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-client-ca\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.591620 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.591755 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hfbn\" (UniqueName: \"kubernetes.io/projected/f9a95cae-c3ff-487c-a26a-ba4a32363ace-kube-api-access-8hfbn\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.592121 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a95cae-c3ff-487c-a26a-ba4a32363ace-serving-cert\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.592435 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.593096 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.093078408 +0000 UTC m=+160.131339154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.594112 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d625077-3908-4a2f-a87e-5631a1a2a450-kube-api-access-mtvsd" (OuterVolumeSpecName: "kube-api-access-mtvsd") pod "3d625077-3908-4a2f-a87e-5631a1a2a450" (UID: "3d625077-3908-4a2f-a87e-5631a1a2a450"). InnerVolumeSpecName "kube-api-access-mtvsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.594585 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d625077-3908-4a2f-a87e-5631a1a2a450-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d625077-3908-4a2f-a87e-5631a1a2a450" (UID: "3d625077-3908-4a2f-a87e-5631a1a2a450"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.595616 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.595727 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.595815 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.595936 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d625077-3908-4a2f-a87e-5631a1a2a450-client-ca\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.596041 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-client-ca\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.599526 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-kube-api-access-6pvq4" (OuterVolumeSpecName: "kube-api-access-6pvq4") pod "30a068b8-6cf5-4f60-9a4b-89c40b37ad32" (UID: "30a068b8-6cf5-4f60-9a4b-89c40b37ad32"). InnerVolumeSpecName "kube-api-access-6pvq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.644089 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58576: no serving certificate available for the kubelet"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.697742 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.698162 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-config\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.698198 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-client-ca\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.698235 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hfbn\" (UniqueName: \"kubernetes.io/projected/f9a95cae-c3ff-487c-a26a-ba4a32363ace-kube-api-access-8hfbn\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.698282 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a95cae-c3ff-487c-a26a-ba4a32363ace-serving-cert\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.698335 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d625077-3908-4a2f-a87e-5631a1a2a450-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.698347 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtvsd\" (UniqueName: \"kubernetes.io/projected/3d625077-3908-4a2f-a87e-5631a1a2a450-kube-api-access-mtvsd\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.698357 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pvq4\" (UniqueName: \"kubernetes.io/projected/30a068b8-6cf5-4f60-9a4b-89c40b37ad32-kube-api-access-6pvq4\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.700281 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-client-ca\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.700830 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.200815579 +0000 UTC m=+160.239076325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.701043 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-config\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.703964 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a95cae-c3ff-487c-a26a-ba4a32363ace-serving-cert\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.727915 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hfbn\" (UniqueName: \"kubernetes.io/projected/f9a95cae-c3ff-487c-a26a-ba4a32363ace-kube-api-access-8hfbn\") pod \"route-controller-manager-7796bb49d4-rbmxt\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") " pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.776773 4817 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5mbx5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.776860 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" podUID="9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.792448 4817 generic.go:334] "Generic (PLEG): container finished" podID="30a068b8-6cf5-4f60-9a4b-89c40b37ad32" containerID="f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019" exitCode=0
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.792508 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" event={"ID":"30a068b8-6cf5-4f60-9a4b-89c40b37ad32","Type":"ContainerDied","Data":"f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019"}
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.792535 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j" event={"ID":"30a068b8-6cf5-4f60-9a4b-89c40b37ad32","Type":"ContainerDied","Data":"72bd7e80f1abd14e784322313489678227d4087b615c81c824418703e4aa3f59"}
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.792554 4817 scope.go:117] "RemoveContainer" containerID="f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.792656 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.799716 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.800210 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.300174807 +0000 UTC m=+160.338435553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.806765 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" event={"ID":"4fc456eb-fbb1-42c7-88fa-f7a3e13a397a","Type":"ContainerStarted","Data":"5a45b9da61b4ecba0a053ff6160b3a2c897659c882e619b19026b47ce8f170c2"}
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.829115 4817 generic.go:334] "Generic (PLEG): container finished" podID="3d625077-3908-4a2f-a87e-5631a1a2a450" containerID="63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44" exitCode=0
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.829782 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"]
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.830061 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" event={"ID":"3d625077-3908-4a2f-a87e-5631a1a2a450","Type":"ContainerDied","Data":"63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44"}
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.830101 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w" event={"ID":"3d625077-3908-4a2f-a87e-5631a1a2a450","Type":"ContainerDied","Data":"f2115555f97cf112f910e5dbcbe1dff3f6712cbd60bb0ae6fad13789fc55db21"}
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.830203 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-86l7w"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.830280 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jsk5j"]
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.830354 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" gracePeriod=30
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.830566 4817 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5cqlk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.830595 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" podUID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.834045 4817 scope.go:117] "RemoveContainer" containerID="f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019"
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.836102 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019\": container with ID starting with f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019 not found: ID does not exist" containerID="f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.836146 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019"} err="failed to get container status \"f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019\": rpc error: code = NotFound desc = could not find container \"f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019\": container with ID starting with f4b1a606a331209880d9595b2c7fe00cc8c53e83b0c02e0139157f59ce677019 not found: ID does not exist"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.836171 4817 scope.go:117] "RemoveContainer" containerID="63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.850274 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.850332 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.858347 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58582: no serving certificate available for the kubelet"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.882504 4817 scope.go:117] "RemoveContainer" containerID="63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44"
Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.883402 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44\": container with ID starting with 63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44 not found: ID does not exist" containerID="63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.883436 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44"} err="failed to get container status \"63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44\": rpc error: code = NotFound desc = could not find container \"63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44\": container with ID starting with 63b225afcdf351f084f1c89a13ccaced058ad7b205799693526967dd2e092e44 not found: ID does not exist"
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.904623 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.906546 4817
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" podStartSLOduration=95.906526527 podStartE2EDuration="1m35.906526527s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:05.87245502 +0000 UTC m=+159.910715766" watchObservedRunningTime="2026-03-14 05:35:05.906526527 +0000 UTC m=+159.944787263" Mar 14 05:35:05 crc kubenswrapper[4817]: E0314 05:35:05.907123 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.407098884 +0000 UTC m=+160.445359630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.947119 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86l7w"] Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.950372 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-86l7w"] Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.997670 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 14 05:35:05 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld Mar 14 05:35:05 crc kubenswrapper[4817]: [+]process-running ok Mar 14 05:35:05 crc kubenswrapper[4817]: healthz check failed Mar 14 05:35:05 crc kubenswrapper[4817]: I0314 05:35:05.997727 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.007371 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.007883 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.507864623 +0000 UTC m=+160.546125369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.109539 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.109929 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.609913849 +0000 UTC m=+160.648174585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.111813 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mxvkz"] Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.112655 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.116269 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.168660 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxvkz"] Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.216246 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-utilities\") pod \"certified-operators-mxvkz\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.216302 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.216323 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjcn\" (UniqueName: \"kubernetes.io/projected/3bf969ab-d18a-43ef-88be-3e1337f14b4d-kube-api-access-rxjcn\") pod \"certified-operators-mxvkz\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.216375 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-catalog-content\") pod \"certified-operators-mxvkz\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.216659 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.71664719 +0000 UTC m=+160.754907936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.260322 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58588: no serving certificate available for the kubelet" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.297312 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hbjqr"] Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.306998 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.314524 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbjqr"] Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.317436 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.317571 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-utilities\") pod \"certified-operators-mxvkz\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.317617 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxjcn\" (UniqueName: \"kubernetes.io/projected/3bf969ab-d18a-43ef-88be-3e1337f14b4d-kube-api-access-rxjcn\") pod \"certified-operators-mxvkz\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.317694 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-catalog-content\") pod \"certified-operators-mxvkz\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.318934 4817 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.818885272 +0000 UTC m=+160.857146028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.320775 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-catalog-content\") pod \"certified-operators-mxvkz\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.321330 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.327055 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-utilities\") pod \"certified-operators-mxvkz\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.358723 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxjcn\" (UniqueName: \"kubernetes.io/projected/3bf969ab-d18a-43ef-88be-3e1337f14b4d-kube-api-access-rxjcn\") pod \"certified-operators-mxvkz\" (UID: 
\"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.418611 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrgl\" (UniqueName: \"kubernetes.io/projected/c132937c-20aa-47d7-903b-92a9ec65ba6f-kube-api-access-hlrgl\") pod \"community-operators-hbjqr\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.418686 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-catalog-content\") pod \"community-operators-hbjqr\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.418709 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-utilities\") pod \"community-operators-hbjqr\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.418740 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.419040 4817 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:06.919027612 +0000 UTC m=+160.957288358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.435958 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"] Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.461215 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:35:06 crc kubenswrapper[4817]: W0314 05:35:06.467651 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a95cae_c3ff_487c_a26a_ba4a32363ace.slice/crio-5b90e78406856202203d6115e69592d56214cac977c74569f58389d764fc0dba WatchSource:0}: Error finding container 5b90e78406856202203d6115e69592d56214cac977c74569f58389d764fc0dba: Status 404 returned error can't find the container with id 5b90e78406856202203d6115e69592d56214cac977c74569f58389d764fc0dba Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.503929 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9p69t"] Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.504862 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.520358 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.520501 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.020476911 +0000 UTC m=+161.058737657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.520557 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-catalog-content\") pod \"community-operators-hbjqr\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.521403 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-utilities\") pod \"community-operators-hbjqr\" 
(UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.521485 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.521881 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-catalog-content\") pod \"community-operators-hbjqr\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.521972 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-utilities\") pod \"community-operators-hbjqr\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.522068 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.022047306 +0000 UTC m=+161.060308052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.522268 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlrgl\" (UniqueName: \"kubernetes.io/projected/c132937c-20aa-47d7-903b-92a9ec65ba6f-kube-api-access-hlrgl\") pod \"community-operators-hbjqr\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.560663 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlrgl\" (UniqueName: \"kubernetes.io/projected/c132937c-20aa-47d7-903b-92a9ec65ba6f-kube-api-access-hlrgl\") pod \"community-operators-hbjqr\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.611574 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9p69t"] Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.623360 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.623698 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2vc68\" (UniqueName: \"kubernetes.io/projected/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-kube-api-access-2vc68\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.623740 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-utilities\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.623767 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.123741162 +0000 UTC m=+161.162001908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.623816 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-catalog-content\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.648291 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.725646 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vc68\" (UniqueName: \"kubernetes.io/projected/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-kube-api-access-2vc68\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.725995 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-utilities\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.726034 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-catalog-content\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.726080 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.726364 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.226351074 +0000 UTC m=+161.264611820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.727157 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-utilities\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.727385 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-catalog-content\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.731583 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6v7h"]
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.735797 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.748684 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a068b8-6cf5-4f60-9a4b-89c40b37ad32" path="/var/lib/kubelet/pods/30a068b8-6cf5-4f60-9a4b-89c40b37ad32/volumes"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.749383 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d625077-3908-4a2f-a87e-5631a1a2a450" path="/var/lib/kubelet/pods/3d625077-3908-4a2f-a87e-5631a1a2a450/volumes"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.749810 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6v7h"]
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.762812 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vc68\" (UniqueName: \"kubernetes.io/projected/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-kube-api-access-2vc68\") pod \"certified-operators-9p69t\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") " pod="openshift-marketplace/certified-operators-9p69t"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.772854 4817 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.826682 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.826990 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.326961848 +0000 UTC m=+161.365222594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.828367 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhv4\" (UniqueName: \"kubernetes.io/projected/213ab431-bf6b-41ea-8117-d3406d2b654a-kube-api-access-sdhv4\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.828399 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-catalog-content\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.828428 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-utilities\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.828547 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.828825 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.328814492 +0000 UTC m=+161.367075238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.849041 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9p69t"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.867870 4817 generic.go:334] "Generic (PLEG): container finished" podID="cdb37610-64a2-43d9-98b1-513a60b6de4d" containerID="5a8f548a937ceb9117b83d0d4cf8205c0bbaedacf7c23ae3253e1ef339153343" exitCode=0
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.867940 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" event={"ID":"cdb37610-64a2-43d9-98b1-513a60b6de4d","Type":"ContainerDied","Data":"5a8f548a937ceb9117b83d0d4cf8205c0bbaedacf7c23ae3253e1ef339153343"}
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.915364 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt" event={"ID":"f9a95cae-c3ff-487c-a26a-ba4a32363ace","Type":"ContainerStarted","Data":"775ddda9e523a6b7f667de602956d605ad162e117a6539e1ee8f786f073f3328"}
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.915413 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt" event={"ID":"f9a95cae-c3ff-487c-a26a-ba4a32363ace","Type":"ContainerStarted","Data":"5b90e78406856202203d6115e69592d56214cac977c74569f58389d764fc0dba"}
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.915917 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.930123 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.930729 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhv4\" (UniqueName: \"kubernetes.io/projected/213ab431-bf6b-41ea-8117-d3406d2b654a-kube-api-access-sdhv4\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.930762 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-catalog-content\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: E0314 05:35:06.932466 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.432432203 +0000 UTC m=+161.470692949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.930800 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-utilities\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.932851 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-catalog-content\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.933087 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-utilities\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.953448 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt" podStartSLOduration=3.953430972 podStartE2EDuration="3.953430972s" podCreationTimestamp="2026-03-14 05:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:06.951466005 +0000 UTC m=+160.989726761" watchObservedRunningTime="2026-03-14 05:35:06.953430972 +0000 UTC m=+160.991691718"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.959927 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" event={"ID":"1661ef8f-2020-4c65-8228-434786300314","Type":"ContainerStarted","Data":"a29d490b726ea4afcacbe20bf410a98f5887a166e2e2a8112581a1abc98852d9"}
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.960164 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhv4\" (UniqueName: \"kubernetes.io/projected/213ab431-bf6b-41ea-8117-d3406d2b654a-kube-api-access-sdhv4\") pod \"community-operators-h6v7h\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.974520 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58598: no serving certificate available for the kubelet"
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.997432 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:06 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld
Mar 14 05:35:06 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:06 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:06 crc kubenswrapper[4817]: I0314 05:35:06.997478 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.034370 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:07 crc kubenswrapper[4817]: E0314 05:35:07.035545 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.53553021 +0000 UTC m=+161.573790956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lcsh2" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.100491 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6v7h"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.104825 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbjqr"]
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.135057 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:07 crc kubenswrapper[4817]: E0314 05:35:07.135877 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 05:35:07.635860606 +0000 UTC m=+161.674121352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.164467 4817 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-14T05:35:06.772880052Z","Handler":null,"Name":""}
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.169925 4817 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.169954 4817 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.234637 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxvkz"]
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.236629 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:07 crc kubenswrapper[4817]: W0314 05:35:07.250028 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf969ab_d18a_43ef_88be_3e1337f14b4d.slice/crio-36c5a516f3696346cf024872adf034ec268ab39db114312ddc91c4c9a18c5a79 WatchSource:0}: Error finding container 36c5a516f3696346cf024872adf034ec268ab39db114312ddc91c4c9a18c5a79: Status 404 returned error can't find the container with id 36c5a516f3696346cf024872adf034ec268ab39db114312ddc91c4c9a18c5a79
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.254301 4817 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.254363 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.283255 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lcsh2\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") " pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.337715 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.348636 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.369648 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.389067 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9p69t"]
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.462389 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6v7h"]
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.519226 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.540763 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 05:35:07 crc kubenswrapper[4817]: E0314 05:35:07.556024 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf969ab_d18a_43ef_88be_3e1337f14b4d.slice/crio-46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132.scope\": RecentStats: unable to find data in memory cache]"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.567981 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.638878 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lcsh2"]
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.643349 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.643435 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.643466 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.646664 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.652512 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.661601 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.810009 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wbttc"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.847169 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.858384 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.869301 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.989230 4817 generic.go:334] "Generic (PLEG): container finished" podID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerID="1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a" exitCode=0
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.989526 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p69t" event={"ID":"2c5f53ee-2afc-4fe8-a17c-10c9808edac2","Type":"ContainerDied","Data":"1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a"}
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.989566 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p69t" event={"ID":"2c5f53ee-2afc-4fe8-a17c-10c9808edac2","Type":"ContainerStarted","Data":"2d4e9184854d8f85636475d13a39cf5c0a8a5b92085cf5bfe9c48f0dfd33f305"}
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.994959 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:07 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld
Mar 14 05:35:07 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:07 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.995007 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.999854 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" event={"ID":"1661ef8f-2020-4c65-8228-434786300314","Type":"ContainerStarted","Data":"f1a665df2906ef1a976996aa862344cc4baab81ca70820634bbad6e9861fb7ff"}
Mar 14 05:35:07 crc kubenswrapper[4817]: I0314 05:35:07.999903 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" event={"ID":"1661ef8f-2020-4c65-8228-434786300314","Type":"ContainerStarted","Data":"b60ac62f7c445887027db4288eb4af82c129e28b165efa3ee903d0339700eac1"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.041307 4817 generic.go:334] "Generic (PLEG): container finished" podID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerID="2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1" exitCode=0
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.041390 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbjqr" event={"ID":"c132937c-20aa-47d7-903b-92a9ec65ba6f","Type":"ContainerDied","Data":"2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.041416 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbjqr" event={"ID":"c132937c-20aa-47d7-903b-92a9ec65ba6f","Type":"ContainerStarted","Data":"2a4736c3188ba57082a0fb1c2ec253b8ea78164f11e869a73fb86b005c99e51d"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.047060 4817 generic.go:334] "Generic (PLEG): container finished" podID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerID="46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132" exitCode=0
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.047123 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvkz" event={"ID":"3bf969ab-d18a-43ef-88be-3e1337f14b4d","Type":"ContainerDied","Data":"46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.047150 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvkz" event={"ID":"3bf969ab-d18a-43ef-88be-3e1337f14b4d","Type":"ContainerStarted","Data":"36c5a516f3696346cf024872adf034ec268ab39db114312ddc91c4c9a18c5a79"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.060141 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" event={"ID":"e1b60a8f-12a6-4129-9b96-2b69e788111b","Type":"ContainerStarted","Data":"1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.060217 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" event={"ID":"e1b60a8f-12a6-4129-9b96-2b69e788111b","Type":"ContainerStarted","Data":"898d7f0e8195b06c4864c569271f5ad8825e72250a0a0558c29460bb12e079f1"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.060287 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.062507 4817 generic.go:334] "Generic (PLEG): container finished" podID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerID="2e6310ad1d6db96cd44f8266baa01f0fd12796dce3973687bf7198745d26b873" exitCode=0
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.063065 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7h" event={"ID":"213ab431-bf6b-41ea-8117-d3406d2b654a","Type":"ContainerDied","Data":"2e6310ad1d6db96cd44f8266baa01f0fd12796dce3973687bf7198745d26b873"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.063132 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7h" event={"ID":"213ab431-bf6b-41ea-8117-d3406d2b654a","Type":"ContainerStarted","Data":"35b0f57d003653ed6ecd559f52765aa8d33592760a9cf3ce53e933b36286cc37"}
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.082568 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6mx7t" podStartSLOduration=12.082533767 podStartE2EDuration="12.082533767s" podCreationTimestamp="2026-03-14 05:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:08.05399257 +0000 UTC m=+162.092253336" watchObservedRunningTime="2026-03-14 05:35:08.082533767 +0000 UTC m=+162.120794513"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.155251 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.156129 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" podStartSLOduration=98.156109198 podStartE2EDuration="1m38.156109198s" podCreationTimestamp="2026-03-14 05:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:08.14790044 +0000 UTC m=+162.186161196" watchObservedRunningTime="2026-03-14 05:35:08.156109198 +0000 UTC m=+162.194369934"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.156205 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.162261 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.162484 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.169776 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Mar 14 05:35:08 crc kubenswrapper[4817]: W0314 05:35:08.192040 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-d04685ff3d5debf49e4c8762a2889bda9e949aa1dd5f80fc92cae4b74babd46d WatchSource:0}: Error finding container d04685ff3d5debf49e4c8762a2889bda9e949aa1dd5f80fc92cae4b74babd46d: Status 404 returned error can't find the container with id d04685ff3d5debf49e4c8762a2889bda9e949aa1dd5f80fc92cae4b74babd46d
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.205801 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"]
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.210420 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.213322 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"]
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.214276 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.214493 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.214990 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.215392 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.215526 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 14 05:35:08 crc kubenswrapper[4817]: W0314 05:35:08.218033 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-52a0b07662aa2c7aca727f49268759b82dc020ae2a697ca7bbbf77b5798d5a65 WatchSource:0}: Error finding container 52a0b07662aa2c7aca727f49268759b82dc020ae2a697ca7bbbf77b5798d5a65: Status 404 returned error can't find the container with id 52a0b07662aa2c7aca727f49268759b82dc020ae2a697ca7bbbf77b5798d5a65
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.218536 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.220804 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.266795 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d80934b-65c8-4f63-a064-a4273672ceb7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d80934b-65c8-4f63-a064-a4273672ceb7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.267257 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lhv\" (UniqueName: \"kubernetes.io/projected/354d0464-c5a5-483d-ad85-4961ce201392-kube-api-access-r4lhv\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.267293 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/354d0464-c5a5-483d-ad85-4961ce201392-serving-cert\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.267317 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d80934b-65c8-4f63-a064-a4273672ceb7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d80934b-65c8-4f63-a064-a4273672ceb7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.267357 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-proxy-ca-bundles\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.267395 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-config\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.267418 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-client-ca\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.289516 4817 ???:1] "http: TLS handshake error from 192.168.126.11:58606: no serving certificate available for the kubelet" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.298224 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n45mx"] Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.304238 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.307028 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.316666 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n45mx"] Mar 14 05:35:08 crc kubenswrapper[4817]: W0314 05:35:08.357886 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-9df7c333bbe0cefbeaba9c8f083e937b48df03da4ff67e067820edcc79b85c55 WatchSource:0}: Error finding container 9df7c333bbe0cefbeaba9c8f083e937b48df03da4ff67e067820edcc79b85c55: Status 404 returned error can't find the container with id 9df7c333bbe0cefbeaba9c8f083e937b48df03da4ff67e067820edcc79b85c55 Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.368864 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d80934b-65c8-4f63-a064-a4273672ceb7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d80934b-65c8-4f63-a064-a4273672ceb7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.368952 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jxws\" (UniqueName: \"kubernetes.io/projected/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-kube-api-access-9jxws\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.368989 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4lhv\" (UniqueName: 
\"kubernetes.io/projected/354d0464-c5a5-483d-ad85-4961ce201392-kube-api-access-r4lhv\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.369027 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/354d0464-c5a5-483d-ad85-4961ce201392-serving-cert\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.369082 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d80934b-65c8-4f63-a064-a4273672ceb7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d80934b-65c8-4f63-a064-a4273672ceb7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.369138 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-proxy-ca-bundles\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.369185 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-utilities\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.369225 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-config\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.369390 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-client-ca\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.369435 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-catalog-content\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.370485 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-proxy-ca-bundles\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.370614 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d80934b-65c8-4f63-a064-a4273672ceb7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0d80934b-65c8-4f63-a064-a4273672ceb7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.370755 
4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-config\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.371034 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-client-ca\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.375816 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/354d0464-c5a5-483d-ad85-4961ce201392-serving-cert\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.381470 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.386029 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d80934b-65c8-4f63-a064-a4273672ceb7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0d80934b-65c8-4f63-a064-a4273672ceb7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.387564 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4lhv\" (UniqueName: \"kubernetes.io/projected/354d0464-c5a5-483d-ad85-4961ce201392-kube-api-access-r4lhv\") pod \"controller-manager-f8cd57fdc-9kt5h\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") " pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.469982 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cjhd\" (UniqueName: \"kubernetes.io/projected/cdb37610-64a2-43d9-98b1-513a60b6de4d-kube-api-access-8cjhd\") pod \"cdb37610-64a2-43d9-98b1-513a60b6de4d\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.470041 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb37610-64a2-43d9-98b1-513a60b6de4d-secret-volume\") pod \"cdb37610-64a2-43d9-98b1-513a60b6de4d\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.470099 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb37610-64a2-43d9-98b1-513a60b6de4d-config-volume\") pod \"cdb37610-64a2-43d9-98b1-513a60b6de4d\" (UID: \"cdb37610-64a2-43d9-98b1-513a60b6de4d\") " Mar 14 
05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.470266 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-utilities\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.470307 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-catalog-content\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.470341 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jxws\" (UniqueName: \"kubernetes.io/projected/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-kube-api-access-9jxws\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.471137 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-utilities\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.471410 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-catalog-content\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 
05:35:08.471481 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb37610-64a2-43d9-98b1-513a60b6de4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "cdb37610-64a2-43d9-98b1-513a60b6de4d" (UID: "cdb37610-64a2-43d9-98b1-513a60b6de4d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.475927 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb37610-64a2-43d9-98b1-513a60b6de4d-kube-api-access-8cjhd" (OuterVolumeSpecName: "kube-api-access-8cjhd") pod "cdb37610-64a2-43d9-98b1-513a60b6de4d" (UID: "cdb37610-64a2-43d9-98b1-513a60b6de4d"). InnerVolumeSpecName "kube-api-access-8cjhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.475977 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb37610-64a2-43d9-98b1-513a60b6de4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cdb37610-64a2-43d9-98b1-513a60b6de4d" (UID: "cdb37610-64a2-43d9-98b1-513a60b6de4d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.486021 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.489344 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jxws\" (UniqueName: \"kubernetes.io/projected/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-kube-api-access-9jxws\") pod \"redhat-marketplace-n45mx\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.492795 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpr5b" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.520238 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.551808 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.571780 4817 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb37610-64a2-43d9-98b1-513a60b6de4d-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.571810 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cjhd\" (UniqueName: \"kubernetes.io/projected/cdb37610-64a2-43d9-98b1-513a60b6de4d-kube-api-access-8cjhd\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.571822 4817 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdb37610-64a2-43d9-98b1-513a60b6de4d-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.590618 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jx7hb" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.599866 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.599923 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.616159 4817 patch_prober.go:28] interesting pod/console-f9d7485db-jnxpm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.616231 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jnxpm" 
podUID="ac384b90-5e6b-4477-b71a-8a8a56a29896" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.625410 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.641363 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 05:35:08 crc kubenswrapper[4817]: E0314 05:35:08.641767 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb37610-64a2-43d9-98b1-513a60b6de4d" containerName="collect-profiles" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.641791 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb37610-64a2-43d9-98b1-513a60b6de4d" containerName="collect-profiles" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.642074 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb37610-64a2-43d9-98b1-513a60b6de4d" containerName="collect-profiles" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.642557 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.651791 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.652375 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.654649 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.659331 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lg8lf" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.726542 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-24f8l"] Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.727915 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.760539 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.762030 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24f8l"] Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.762062 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.762079 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.769549 4817 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rt8qf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]log ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]etcd ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/generic-apiserver-start-informers ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/max-in-flight-filter ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 14 05:35:08 crc kubenswrapper[4817]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 14 05:35:08 crc kubenswrapper[4817]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason 
withheld Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/project.openshift.io-projectcache ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/openshift.io-startinformers ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 14 05:35:08 crc kubenswrapper[4817]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 14 05:35:08 crc kubenswrapper[4817]: livez check failed Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.769598 4817 patch_prober.go:28] interesting pod/downloads-7954f5f757-gcsxl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.769616 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf" podUID="4fc456eb-fbb1-42c7-88fa-f7a3e13a397a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.769643 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gcsxl" podUID="3b721585-56d3-4382-b93d-c70296e6d223" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.769809 4817 patch_prober.go:28] interesting pod/downloads-7954f5f757-gcsxl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.769856 4817 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gcsxl" podUID="3b721585-56d3-4382-b93d-c70296e6d223" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.774575 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a0130a-327e-4012-9c36-3a5de6906b9e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"01a0130a-327e-4012-9c36-3a5de6906b9e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.774613 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwfdp\" (UniqueName: \"kubernetes.io/projected/655a63e0-d806-4b09-a33f-aef9c8c58b54-kube-api-access-mwfdp\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.774634 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01a0130a-327e-4012-9c36-3a5de6906b9e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"01a0130a-327e-4012-9c36-3a5de6906b9e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.774672 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-catalog-content\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.774691 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-utilities\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.876032 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-catalog-content\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.876421 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-utilities\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.876530 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-catalog-content\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.878856 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-utilities\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.879207 4817 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a0130a-327e-4012-9c36-3a5de6906b9e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"01a0130a-327e-4012-9c36-3a5de6906b9e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.879250 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwfdp\" (UniqueName: \"kubernetes.io/projected/655a63e0-d806-4b09-a33f-aef9c8c58b54-kube-api-access-mwfdp\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.879271 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01a0130a-327e-4012-9c36-3a5de6906b9e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"01a0130a-327e-4012-9c36-3a5de6906b9e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.880052 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a0130a-327e-4012-9c36-3a5de6906b9e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"01a0130a-327e-4012-9c36-3a5de6906b9e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.913624 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwfdp\" (UniqueName: \"kubernetes.io/projected/655a63e0-d806-4b09-a33f-aef9c8c58b54-kube-api-access-mwfdp\") pod \"redhat-marketplace-24f8l\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") " pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.915223 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/01a0130a-327e-4012-9c36-3a5de6906b9e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"01a0130a-327e-4012-9c36-3a5de6906b9e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.957054 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"] Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.965919 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.984913 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.991669 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-76kfl" Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.997016 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 05:35:08 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld Mar 14 05:35:08 crc kubenswrapper[4817]: [+]process-running ok Mar 14 05:35:08 crc kubenswrapper[4817]: healthz check failed Mar 14 05:35:08 crc kubenswrapper[4817]: I0314 05:35:08.997048 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.071475 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.117290 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"49a74aa1231f342e663e84fde0328b64abe8d2c4137b9d30606e266d8d687aa0"} Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.117349 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d04685ff3d5debf49e4c8762a2889bda9e949aa1dd5f80fc92cae4b74babd46d"} Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.118940 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.140140 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.162576 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" event={"ID":"354d0464-c5a5-483d-ad85-4961ce201392","Type":"ContainerStarted","Data":"02d50e5c1f1720a22dd2981bdac31c7a0fc391f66e61ad7340a434939a021e04"} Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.169796 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"70d250b284fdce5c5c513dcbaae71d41288297f94808a2a6a018781e974fe75d"} Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.169848 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9df7c333bbe0cefbeaba9c8f083e937b48df03da4ff67e067820edcc79b85c55"} Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.209109 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" event={"ID":"cdb37610-64a2-43d9-98b1-513a60b6de4d","Type":"ContainerDied","Data":"39bee6d4b7f6587f967af9c30c936d1633a598a19707fc35a4c7d5fe81bd64d2"} Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.209144 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39bee6d4b7f6587f967af9c30c936d1633a598a19707fc35a4c7d5fe81bd64d2" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.209244 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.232585 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"328388bd7adecfe5371e1f5bfac0ee91deefa42e3dcbd6a8619c509811da20a8"} Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.232629 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"52a0b07662aa2c7aca727f49268759b82dc020ae2a697ca7bbbf77b5798d5a65"} Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.289588 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n45mx"] Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.318459 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9nwvm"] Mar 14 05:35:09 crc kubenswrapper[4817]: 
I0314 05:35:09.328358 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.339498 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.340025 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.342623 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nwvm"] Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.391505 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfwll\" (UniqueName: \"kubernetes.io/projected/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-kube-api-access-zfwll\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.391579 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-catalog-content\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.391627 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-utilities\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: E0314 
05:35:09.433156 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:09 crc kubenswrapper[4817]: E0314 05:35:09.471044 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:09 crc kubenswrapper[4817]: E0314 05:35:09.486074 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:09 crc kubenswrapper[4817]: E0314 05:35:09.486149 4817 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.493075 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-utilities\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.493171 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zfwll\" (UniqueName: \"kubernetes.io/projected/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-kube-api-access-zfwll\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.493214 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-catalog-content\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.493761 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-catalog-content\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.493766 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-utilities\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.524581 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfwll\" (UniqueName: \"kubernetes.io/projected/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-kube-api-access-zfwll\") pod \"redhat-operators-9nwvm\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.539561 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 05:35:09 crc kubenswrapper[4817]: W0314 05:35:09.555483 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod01a0130a_327e_4012_9c36_3a5de6906b9e.slice/crio-d51e8b900462b99acc149dc940c4ed29e623843dc239f2556d869661f08eefcb WatchSource:0}: Error finding container d51e8b900462b99acc149dc940c4ed29e623843dc239f2556d869661f08eefcb: Status 404 returned error can't find the container with id d51e8b900462b99acc149dc940c4ed29e623843dc239f2556d869661f08eefcb Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.694525 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24f8l"] Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.699728 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cs45w"] Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.704292 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.706534 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.716422 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cs45w"] Mar 14 05:35:09 crc kubenswrapper[4817]: W0314 05:35:09.793069 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod655a63e0_d806_4b09_a33f_aef9c8c58b54.slice/crio-6fd8b765096597d6b4f83a5751db41b82cd0f4ee5b532a8f573f96eced12e110 WatchSource:0}: Error finding container 6fd8b765096597d6b4f83a5751db41b82cd0f4ee5b532a8f573f96eced12e110: Status 404 returned error can't find the container with id 6fd8b765096597d6b4f83a5751db41b82cd0f4ee5b532a8f573f96eced12e110 Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.799311 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-catalog-content\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.799399 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-utilities\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.799437 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4fjd\" (UniqueName: \"kubernetes.io/projected/d94326da-6089-4fb4-be56-29635a38651f-kube-api-access-g4fjd\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " 
pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.906848 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4fjd\" (UniqueName: \"kubernetes.io/projected/d94326da-6089-4fb4-be56-29635a38651f-kube-api-access-g4fjd\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.907778 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-catalog-content\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.907934 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-utilities\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.908385 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-utilities\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc kubenswrapper[4817]: I0314 05:35:09.908702 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-catalog-content\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:09 crc 
kubenswrapper[4817]: I0314 05:35:09.948391 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4fjd\" (UniqueName: \"kubernetes.io/projected/d94326da-6089-4fb4-be56-29635a38651f-kube-api-access-g4fjd\") pod \"redhat-operators-cs45w\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.005033 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 05:35:10 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld Mar 14 05:35:10 crc kubenswrapper[4817]: [+]process-running ok Mar 14 05:35:10 crc kubenswrapper[4817]: healthz check failed Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.005106 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.039507 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.214827 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nwvm"] Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.246615 4817 generic.go:334] "Generic (PLEG): container finished" podID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerID="c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452" exitCode=0 Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.246723 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n45mx" event={"ID":"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf","Type":"ContainerDied","Data":"c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452"} Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.246774 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n45mx" event={"ID":"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf","Type":"ContainerStarted","Data":"4ba88faca6bc914d6a686758cf8523ad3f2440301219c1ca8ccaec18c52b5feb"} Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.282035 4817 generic.go:334] "Generic (PLEG): container finished" podID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerID="a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab" exitCode=0 Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.282109 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24f8l" event={"ID":"655a63e0-d806-4b09-a33f-aef9c8c58b54","Type":"ContainerDied","Data":"a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab"} Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.282138 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24f8l" 
event={"ID":"655a63e0-d806-4b09-a33f-aef9c8c58b54","Type":"ContainerStarted","Data":"6fd8b765096597d6b4f83a5751db41b82cd0f4ee5b532a8f573f96eced12e110"} Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.300599 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" event={"ID":"354d0464-c5a5-483d-ad85-4961ce201392","Type":"ContainerStarted","Data":"e5bc1099a3efa7d6d7c571da434dcb93e08529e4137d37d4997a0b9290d62ff1"} Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.301848 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.339860 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.357007 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"01a0130a-327e-4012-9c36-3a5de6906b9e","Type":"ContainerStarted","Data":"d51e8b900462b99acc149dc940c4ed29e623843dc239f2556d869661f08eefcb"} Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.362432 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0d80934b-65c8-4f63-a064-a4273672ceb7","Type":"ContainerStarted","Data":"7071d8fb6b11ad99c6484fc2822dc065d45a97ef1c12cb941544959c062a196b"} Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.362502 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0d80934b-65c8-4f63-a064-a4273672ceb7","Type":"ContainerStarted","Data":"340402512dc83f112fec2197e4f7b3dce7f242251f3bf7b366d933491b9e0104"} Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.381937 4817 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" podStartSLOduration=7.381872598 podStartE2EDuration="7.381872598s" podCreationTimestamp="2026-03-14 05:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:10.378662955 +0000 UTC m=+164.416923721" watchObservedRunningTime="2026-03-14 05:35:10.381872598 +0000 UTC m=+164.420133344" Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.414219 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.414196964 podStartE2EDuration="2.414196964s" podCreationTimestamp="2026-03-14 05:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:10.413811923 +0000 UTC m=+164.452072669" watchObservedRunningTime="2026-03-14 05:35:10.414196964 +0000 UTC m=+164.452457720" Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.698492 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cs45w"] Mar 14 05:35:10 crc kubenswrapper[4817]: I0314 05:35:10.899177 4817 ???:1] "http: TLS handshake error from 192.168.126.11:45126: no serving certificate available for the kubelet" Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.016232 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 05:35:11 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld Mar 14 05:35:11 crc kubenswrapper[4817]: [+]process-running ok Mar 14 05:35:11 crc kubenswrapper[4817]: healthz check failed Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 
05:35:11.016315 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.379522 4817 generic.go:334] "Generic (PLEG): container finished" podID="01a0130a-327e-4012-9c36-3a5de6906b9e" containerID="5f0277af6208f03becb56d744d3b828e22965827329285561d67f655b42f6708" exitCode=0 Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.379605 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"01a0130a-327e-4012-9c36-3a5de6906b9e","Type":"ContainerDied","Data":"5f0277af6208f03becb56d744d3b828e22965827329285561d67f655b42f6708"} Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.387671 4817 generic.go:334] "Generic (PLEG): container finished" podID="0d80934b-65c8-4f63-a064-a4273672ceb7" containerID="7071d8fb6b11ad99c6484fc2822dc065d45a97ef1c12cb941544959c062a196b" exitCode=0 Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.387761 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0d80934b-65c8-4f63-a064-a4273672ceb7","Type":"ContainerDied","Data":"7071d8fb6b11ad99c6484fc2822dc065d45a97ef1c12cb941544959c062a196b"} Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.389049 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cs45w" event={"ID":"d94326da-6089-4fb4-be56-29635a38651f","Type":"ContainerStarted","Data":"4c55967916e4e5fb0756aaf20d9c8482c64bea5075b7fec98771ae0a15d5bf11"} Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.391124 4817 generic.go:334] "Generic (PLEG): container finished" podID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerID="85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305" 
exitCode=0 Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.391550 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nwvm" event={"ID":"6cad99d4-915e-406a-bca8-2b58fdc7c7ac","Type":"ContainerDied","Data":"85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305"} Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.391588 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nwvm" event={"ID":"6cad99d4-915e-406a-bca8-2b58fdc7c7ac","Type":"ContainerStarted","Data":"6276a2f38e7db28b5ecc46c00f4b8de6d268157c061b384138a8262767e0dd63"} Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.995304 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 05:35:11 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld Mar 14 05:35:11 crc kubenswrapper[4817]: [+]process-running ok Mar 14 05:35:11 crc kubenswrapper[4817]: healthz check failed Mar 14 05:35:11 crc kubenswrapper[4817]: I0314 05:35:11.995359 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.405127 4817 generic.go:334] "Generic (PLEG): container finished" podID="d94326da-6089-4fb4-be56-29635a38651f" containerID="70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c" exitCode=0 Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.405219 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cs45w" 
event={"ID":"d94326da-6089-4fb4-be56-29635a38651f","Type":"ContainerDied","Data":"70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c"} Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.728625 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.803985 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.871780 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01a0130a-327e-4012-9c36-3a5de6906b9e-kube-api-access\") pod \"01a0130a-327e-4012-9c36-3a5de6906b9e\" (UID: \"01a0130a-327e-4012-9c36-3a5de6906b9e\") " Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.871840 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d80934b-65c8-4f63-a064-a4273672ceb7-kube-api-access\") pod \"0d80934b-65c8-4f63-a064-a4273672ceb7\" (UID: \"0d80934b-65c8-4f63-a064-a4273672ceb7\") " Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.871956 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a0130a-327e-4012-9c36-3a5de6906b9e-kubelet-dir\") pod \"01a0130a-327e-4012-9c36-3a5de6906b9e\" (UID: \"01a0130a-327e-4012-9c36-3a5de6906b9e\") " Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.872064 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d80934b-65c8-4f63-a064-a4273672ceb7-kubelet-dir\") pod \"0d80934b-65c8-4f63-a064-a4273672ceb7\" (UID: \"0d80934b-65c8-4f63-a064-a4273672ceb7\") " Mar 14 05:35:12 crc 
kubenswrapper[4817]: I0314 05:35:12.872502 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d80934b-65c8-4f63-a064-a4273672ceb7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0d80934b-65c8-4f63-a064-a4273672ceb7" (UID: "0d80934b-65c8-4f63-a064-a4273672ceb7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.872460 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a0130a-327e-4012-9c36-3a5de6906b9e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "01a0130a-327e-4012-9c36-3a5de6906b9e" (UID: "01a0130a-327e-4012-9c36-3a5de6906b9e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.873010 4817 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01a0130a-327e-4012-9c36-3a5de6906b9e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.873030 4817 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d80934b-65c8-4f63-a064-a4273672ceb7-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.879078 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d80934b-65c8-4f63-a064-a4273672ceb7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0d80934b-65c8-4f63-a064-a4273672ceb7" (UID: "0d80934b-65c8-4f63-a064-a4273672ceb7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.885757 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a0130a-327e-4012-9c36-3a5de6906b9e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "01a0130a-327e-4012-9c36-3a5de6906b9e" (UID: "01a0130a-327e-4012-9c36-3a5de6906b9e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.973653 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01a0130a-327e-4012-9c36-3a5de6906b9e-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.973683 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d80934b-65c8-4f63-a064-a4273672ceb7-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.995039 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:12 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld
Mar 14 05:35:12 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:12 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:12 crc kubenswrapper[4817]: I0314 05:35:12.995111 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.436838 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.437153 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"01a0130a-327e-4012-9c36-3a5de6906b9e","Type":"ContainerDied","Data":"d51e8b900462b99acc149dc940c4ed29e623843dc239f2556d869661f08eefcb"}
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.437184 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d51e8b900462b99acc149dc940c4ed29e623843dc239f2556d869661f08eefcb"
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.448827 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0d80934b-65c8-4f63-a064-a4273672ceb7","Type":"ContainerDied","Data":"340402512dc83f112fec2197e4f7b3dce7f242251f3bf7b366d933491b9e0104"}
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.448865 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="340402512dc83f112fec2197e4f7b3dce7f242251f3bf7b366d933491b9e0104"
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.448886 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.749101 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf"
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.753233 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rt8qf"
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.993010 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:13 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld
Mar 14 05:35:13 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:13 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:13 crc kubenswrapper[4817]: I0314 05:35:13.993055 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:14 crc kubenswrapper[4817]: I0314 05:35:14.421386 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zc2cb"
Mar 14 05:35:14 crc kubenswrapper[4817]: I0314 05:35:14.994364 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:14 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld
Mar 14 05:35:14 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:14 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:14 crc kubenswrapper[4817]: I0314 05:35:14.994442 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:15 crc kubenswrapper[4817]: I0314 05:35:15.995089 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:15 crc kubenswrapper[4817]: [-]has-synced failed: reason withheld
Mar 14 05:35:15 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:15 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:15 crc kubenswrapper[4817]: I0314 05:35:15.995215 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:16 crc kubenswrapper[4817]: I0314 05:35:16.047313 4817 ???:1] "http: TLS handshake error from 192.168.126.11:45134: no serving certificate available for the kubelet"
Mar 14 05:35:16 crc kubenswrapper[4817]: I0314 05:35:16.195519 4817 ???:1] "http: TLS handshake error from 192.168.126.11:45136: no serving certificate available for the kubelet"
Mar 14 05:35:16 crc kubenswrapper[4817]: I0314 05:35:16.995627 4817 patch_prober.go:28] interesting pod/router-default-5444994796-76kfl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 14 05:35:16 crc kubenswrapper[4817]: [+]has-synced ok
Mar 14 05:35:16 crc kubenswrapper[4817]: [+]process-running ok
Mar 14 05:35:16 crc kubenswrapper[4817]: healthz check failed
Mar 14 05:35:16 crc kubenswrapper[4817]: I0314 05:35:16.995940 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-76kfl" podUID="eaccc4e5-8d10-4383-9aa4-576dbe31fafa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 14 05:35:17 crc kubenswrapper[4817]: I0314 05:35:17.996383 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-76kfl"
Mar 14 05:35:18 crc kubenswrapper[4817]: I0314 05:35:17.998510 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-76kfl"
Mar 14 05:35:18 crc kubenswrapper[4817]: I0314 05:35:18.600529 4817 patch_prober.go:28] interesting pod/console-f9d7485db-jnxpm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Mar 14 05:35:18 crc kubenswrapper[4817]: I0314 05:35:18.600741 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jnxpm" podUID="ac384b90-5e6b-4477-b71a-8a8a56a29896" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Mar 14 05:35:18 crc kubenswrapper[4817]: I0314 05:35:18.767334 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Mar 14 05:35:18 crc kubenswrapper[4817]: I0314 05:35:18.769566 4817 patch_prober.go:28] interesting pod/downloads-7954f5f757-gcsxl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Mar 14 05:35:18 crc kubenswrapper[4817]: I0314 05:35:18.769558 4817 patch_prober.go:28] interesting pod/downloads-7954f5f757-gcsxl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Mar 14 05:35:18 crc kubenswrapper[4817]: I0314 05:35:18.769629 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gcsxl" podUID="3b721585-56d3-4382-b93d-c70296e6d223" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Mar 14 05:35:18 crc kubenswrapper[4817]: I0314 05:35:18.769669 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gcsxl" podUID="3b721585-56d3-4382-b93d-c70296e6d223" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Mar 14 05:35:19 crc kubenswrapper[4817]: E0314 05:35:19.405871 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 14 05:35:19 crc kubenswrapper[4817]: E0314 05:35:19.407844 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 14 05:35:19 crc kubenswrapper[4817]: E0314 05:35:19.409511 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 14 05:35:19 crc kubenswrapper[4817]: E0314 05:35:19.409651 4817 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins"
Mar 14 05:35:22 crc kubenswrapper[4817]: I0314 05:35:22.988880 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"]
Mar 14 05:35:22 crc kubenswrapper[4817]: I0314 05:35:22.989588 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" podUID="354d0464-c5a5-483d-ad85-4961ce201392" containerName="controller-manager" containerID="cri-o://e5bc1099a3efa7d6d7c571da434dcb93e08529e4137d37d4997a0b9290d62ff1" gracePeriod=30
Mar 14 05:35:23 crc kubenswrapper[4817]: I0314 05:35:23.009518 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"]
Mar 14 05:35:23 crc kubenswrapper[4817]: I0314 05:35:23.009760 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt" podUID="f9a95cae-c3ff-487c-a26a-ba4a32363ace" containerName="route-controller-manager" containerID="cri-o://775ddda9e523a6b7f667de602956d605ad162e117a6539e1ee8f786f073f3328" gracePeriod=30
Mar 14 05:35:23 crc kubenswrapper[4817]: I0314 05:35:23.041046 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.041021875 podStartE2EDuration="5.041021875s" podCreationTimestamp="2026-03-14 05:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:23.037501013 +0000 UTC m=+177.075761769" watchObservedRunningTime="2026-03-14 05:35:23.041021875 +0000 UTC m=+177.079282621"
Mar 14 05:35:24 crc kubenswrapper[4817]: I0314 05:35:24.578999 4817 generic.go:334] "Generic (PLEG): container finished" podID="f9a95cae-c3ff-487c-a26a-ba4a32363ace" containerID="775ddda9e523a6b7f667de602956d605ad162e117a6539e1ee8f786f073f3328" exitCode=0
Mar 14 05:35:24 crc kubenswrapper[4817]: I0314 05:35:24.579128 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt" event={"ID":"f9a95cae-c3ff-487c-a26a-ba4a32363ace","Type":"ContainerDied","Data":"775ddda9e523a6b7f667de602956d605ad162e117a6539e1ee8f786f073f3328"}
Mar 14 05:35:24 crc kubenswrapper[4817]: I0314 05:35:24.580861 4817 generic.go:334] "Generic (PLEG): container finished" podID="354d0464-c5a5-483d-ad85-4961ce201392" containerID="e5bc1099a3efa7d6d7c571da434dcb93e08529e4137d37d4997a0b9290d62ff1" exitCode=0
Mar 14 05:35:24 crc kubenswrapper[4817]: I0314 05:35:24.580967 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" event={"ID":"354d0464-c5a5-483d-ad85-4961ce201392","Type":"ContainerDied","Data":"e5bc1099a3efa7d6d7c571da434dcb93e08529e4137d37d4997a0b9290d62ff1"}
Mar 14 05:35:25 crc kubenswrapper[4817]: I0314 05:35:25.109551 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz"
Mar 14 05:35:25 crc kubenswrapper[4817]: I0314 05:35:25.121413 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aae80926-3fb7-4be8-80a0-25c27ee13a03-metrics-certs\") pod \"network-metrics-daemon-4lfsz\" (UID: \"aae80926-3fb7-4be8-80a0-25c27ee13a03\") " pod="openshift-multus/network-metrics-daemon-4lfsz"
Mar 14 05:35:25 crc kubenswrapper[4817]: I0314 05:35:25.264243 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4lfsz"
Mar 14 05:35:25 crc kubenswrapper[4817]: I0314 05:35:25.851976 4817 patch_prober.go:28] interesting pod/route-controller-manager-7796bb49d4-rbmxt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body=
Mar 14 05:35:25 crc kubenswrapper[4817]: I0314 05:35:25.852047 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt" podUID="f9a95cae-c3ff-487c-a26a-ba4a32363ace" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused"
Mar 14 05:35:26 crc kubenswrapper[4817]: I0314 05:35:26.320950 4817 ???:1] "http: TLS handshake error from 192.168.126.11:45926: no serving certificate available for the kubelet"
Mar 14 05:35:27 crc kubenswrapper[4817]: I0314 05:35:27.354264 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:35:28 crc kubenswrapper[4817]: I0314 05:35:28.553199 4817 patch_prober.go:28] interesting pod/controller-manager-f8cd57fdc-9kt5h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Mar 14 05:35:28 crc kubenswrapper[4817]: I0314 05:35:28.553718 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" podUID="354d0464-c5a5-483d-ad85-4961ce201392" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Mar 14 05:35:28 crc kubenswrapper[4817]: I0314 05:35:28.605886 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jnxpm"
Mar 14 05:35:28 crc kubenswrapper[4817]: I0314 05:35:28.609749 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jnxpm"
Mar 14 05:35:28 crc kubenswrapper[4817]: I0314 05:35:28.777990 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gcsxl"
Mar 14 05:35:29 crc kubenswrapper[4817]: E0314 05:35:29.405394 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 14 05:35:29 crc kubenswrapper[4817]: E0314 05:35:29.406807 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 14 05:35:29 crc kubenswrapper[4817]: E0314 05:35:29.408297 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 14 05:35:29 crc kubenswrapper[4817]: E0314 05:35:29.408358 4817 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.652080 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt" event={"ID":"f9a95cae-c3ff-487c-a26a-ba4a32363ace","Type":"ContainerDied","Data":"5b90e78406856202203d6115e69592d56214cac977c74569f58389d764fc0dba"}
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.652680 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b90e78406856202203d6115e69592d56214cac977c74569f58389d764fc0dba"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.654871 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h" event={"ID":"354d0464-c5a5-483d-ad85-4961ce201392","Type":"ContainerDied","Data":"02d50e5c1f1720a22dd2981bdac31c7a0fc391f66e61ad7340a434939a021e04"}
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.654975 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d50e5c1f1720a22dd2981bdac31c7a0fc391f66e61ad7340a434939a021e04"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.704281 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.710304 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.741835 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"]
Mar 14 05:35:35 crc kubenswrapper[4817]: E0314 05:35:35.742352 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a95cae-c3ff-487c-a26a-ba4a32363ace" containerName="route-controller-manager"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.742384 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a95cae-c3ff-487c-a26a-ba4a32363ace" containerName="route-controller-manager"
Mar 14 05:35:35 crc kubenswrapper[4817]: E0314 05:35:35.742705 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354d0464-c5a5-483d-ad85-4961ce201392" containerName="controller-manager"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.742735 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="354d0464-c5a5-483d-ad85-4961ce201392" containerName="controller-manager"
Mar 14 05:35:35 crc kubenswrapper[4817]: E0314 05:35:35.742757 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a0130a-327e-4012-9c36-3a5de6906b9e" containerName="pruner"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.742767 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a0130a-327e-4012-9c36-3a5de6906b9e" containerName="pruner"
Mar 14 05:35:35 crc kubenswrapper[4817]: E0314 05:35:35.742789 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d80934b-65c8-4f63-a064-a4273672ceb7" containerName="pruner"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.742799 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d80934b-65c8-4f63-a064-a4273672ceb7" containerName="pruner"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.742983 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a0130a-327e-4012-9c36-3a5de6906b9e" containerName="pruner"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.743006 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d80934b-65c8-4f63-a064-a4273672ceb7" containerName="pruner"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.743025 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a95cae-c3ff-487c-a26a-ba4a32363ace" containerName="route-controller-manager"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.743043 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="354d0464-c5a5-483d-ad85-4961ce201392" containerName="controller-manager"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.743776 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.760108 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"]
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860470 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-client-ca\") pod \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860538 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-client-ca\") pod \"354d0464-c5a5-483d-ad85-4961ce201392\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860565 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-config\") pod \"354d0464-c5a5-483d-ad85-4961ce201392\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860603 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-config\") pod \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860642 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-proxy-ca-bundles\") pod \"354d0464-c5a5-483d-ad85-4961ce201392\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860661 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4lhv\" (UniqueName: \"kubernetes.io/projected/354d0464-c5a5-483d-ad85-4961ce201392-kube-api-access-r4lhv\") pod \"354d0464-c5a5-483d-ad85-4961ce201392\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860678 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/354d0464-c5a5-483d-ad85-4961ce201392-serving-cert\") pod \"354d0464-c5a5-483d-ad85-4961ce201392\" (UID: \"354d0464-c5a5-483d-ad85-4961ce201392\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860704 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hfbn\" (UniqueName: \"kubernetes.io/projected/f9a95cae-c3ff-487c-a26a-ba4a32363ace-kube-api-access-8hfbn\") pod \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860728 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a95cae-c3ff-487c-a26a-ba4a32363ace-serving-cert\") pod \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\" (UID: \"f9a95cae-c3ff-487c-a26a-ba4a32363ace\") "
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860868 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6hh\" (UniqueName: \"kubernetes.io/projected/23f443ac-640d-4acc-95c7-c757a8f111f8-kube-api-access-gr6hh\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860928 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-config\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.860952 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-client-ca\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.861024 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23f443ac-640d-4acc-95c7-c757a8f111f8-serving-cert\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.861425 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-client-ca" (OuterVolumeSpecName: "client-ca") pod "354d0464-c5a5-483d-ad85-4961ce201392" (UID: "354d0464-c5a5-483d-ad85-4961ce201392"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.861506 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-config" (OuterVolumeSpecName: "config") pod "354d0464-c5a5-483d-ad85-4961ce201392" (UID: "354d0464-c5a5-483d-ad85-4961ce201392"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.861543 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-client-ca" (OuterVolumeSpecName: "client-ca") pod "f9a95cae-c3ff-487c-a26a-ba4a32363ace" (UID: "f9a95cae-c3ff-487c-a26a-ba4a32363ace"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.861677 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-config" (OuterVolumeSpecName: "config") pod "f9a95cae-c3ff-487c-a26a-ba4a32363ace" (UID: "f9a95cae-c3ff-487c-a26a-ba4a32363ace"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.861811 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "354d0464-c5a5-483d-ad85-4961ce201392" (UID: "354d0464-c5a5-483d-ad85-4961ce201392"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.962326 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23f443ac-640d-4acc-95c7-c757a8f111f8-serving-cert\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.963027 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6hh\" (UniqueName: \"kubernetes.io/projected/23f443ac-640d-4acc-95c7-c757a8f111f8-kube-api-access-gr6hh\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.963087 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-config\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.963123 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-client-ca\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.963258 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-client-ca\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.963278 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-client-ca\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.963289 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.963302 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a95cae-c3ff-487c-a26a-ba4a32363ace-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.963314 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/354d0464-c5a5-483d-ad85-4961ce201392-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.964032 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-client-ca\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.965670 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-config\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.966353 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23f443ac-640d-4acc-95c7-c757a8f111f8-serving-cert\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:35 crc kubenswrapper[4817]: I0314 05:35:35.978522 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6hh\" (UniqueName: \"kubernetes.io/projected/23f443ac-640d-4acc-95c7-c757a8f111f8-kube-api-access-gr6hh\") pod \"route-controller-manager-68f5b58f95-dwkkq\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:36 crc kubenswrapper[4817]: I0314 05:35:36.083528 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"
Mar 14 05:35:36 crc kubenswrapper[4817]: I0314 05:35:36.661226 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pk9ws_71e0f963-52a3-45c7-a104-bc2a081c6e8e/kube-multus-additional-cni-plugins/0.log"
Mar 14 05:35:36 crc kubenswrapper[4817]: I0314 05:35:36.661315 4817 generic.go:334] "Generic (PLEG): container finished" podID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" exitCode=137
Mar 14 05:35:36 crc kubenswrapper[4817]: I0314 05:35:36.661410 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"
Mar 14 05:35:36 crc kubenswrapper[4817]: I0314 05:35:36.661421 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" event={"ID":"71e0f963-52a3-45c7-a104-bc2a081c6e8e","Type":"ContainerDied","Data":"f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca"}
Mar 14 05:35:36 crc kubenswrapper[4817]: I0314 05:35:36.661473 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"
Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.246080 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss"]
Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.257939 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss"]
Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.258025 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.792611 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-serving-cert\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.792661 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prznb\" (UniqueName: \"kubernetes.io/projected/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-kube-api-access-prznb\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.792760 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-config\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.792876 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-client-ca\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.792981 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-proxy-ca-bundles\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.894239 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-serving-cert\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.894300 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prznb\" (UniqueName: \"kubernetes.io/projected/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-kube-api-access-prznb\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.894337 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-config\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.894397 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-client-ca\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: 
I0314 05:35:38.894486 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-proxy-ca-bundles\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.895969 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-proxy-ca-bundles\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.898159 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-config\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.898261 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-client-ca\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.905530 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-serving-cert\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " 
pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.916545 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prznb\" (UniqueName: \"kubernetes.io/projected/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-kube-api-access-prznb\") pod \"controller-manager-6b7cf8c6db-b8wss\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:38 crc kubenswrapper[4817]: I0314 05:35:38.997211 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k2dxk" Mar 14 05:35:39 crc kubenswrapper[4817]: I0314 05:35:39.176951 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:39 crc kubenswrapper[4817]: E0314 05:35:39.225254 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 14 05:35:39 crc kubenswrapper[4817]: E0314 05:35:39.225420 4817 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 05:35:39 crc kubenswrapper[4817]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 14 05:35:39 crc kubenswrapper[4817]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56rnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29557774-2vhw8_openshift-infra(b35f8ad5-461a-4c6c-aba1-56b3358990f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 14 05:35:39 crc kubenswrapper[4817]: > logger="UnhandledError" Mar 14 05:35:39 crc kubenswrapper[4817]: E0314 05:35:39.226974 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29557774-2vhw8" podUID="b35f8ad5-461a-4c6c-aba1-56b3358990f8" Mar 14 05:35:39 crc kubenswrapper[4817]: E0314 05:35:39.403304 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca is running failed: container process not found" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:39 crc kubenswrapper[4817]: E0314 05:35:39.403773 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca is running failed: container process not found" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:39 crc kubenswrapper[4817]: E0314 05:35:39.404428 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca is running failed: container process not found" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:39 crc kubenswrapper[4817]: E0314 05:35:39.404463 4817 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins" Mar 14 05:35:39 crc kubenswrapper[4817]: E0314 05:35:39.819412 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29557774-2vhw8" podUID="b35f8ad5-461a-4c6c-aba1-56b3358990f8" Mar 14 05:35:42 crc kubenswrapper[4817]: I0314 05:35:42.972492 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a95cae-c3ff-487c-a26a-ba4a32363ace-kube-api-access-8hfbn" (OuterVolumeSpecName: "kube-api-access-8hfbn") pod "f9a95cae-c3ff-487c-a26a-ba4a32363ace" (UID: "f9a95cae-c3ff-487c-a26a-ba4a32363ace"). 
InnerVolumeSpecName "kube-api-access-8hfbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:35:42 crc kubenswrapper[4817]: I0314 05:35:42.972661 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a95cae-c3ff-487c-a26a-ba4a32363ace-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f9a95cae-c3ff-487c-a26a-ba4a32363ace" (UID: "f9a95cae-c3ff-487c-a26a-ba4a32363ace"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:35:42 crc kubenswrapper[4817]: I0314 05:35:42.972675 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354d0464-c5a5-483d-ad85-4961ce201392-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "354d0464-c5a5-483d-ad85-4961ce201392" (UID: "354d0464-c5a5-483d-ad85-4961ce201392"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:35:42 crc kubenswrapper[4817]: I0314 05:35:42.973048 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354d0464-c5a5-483d-ad85-4961ce201392-kube-api-access-r4lhv" (OuterVolumeSpecName: "kube-api-access-r4lhv") pod "354d0464-c5a5-483d-ad85-4961ce201392" (UID: "354d0464-c5a5-483d-ad85-4961ce201392"). InnerVolumeSpecName "kube-api-access-r4lhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.040939 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss"] Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.054714 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4lhv\" (UniqueName: \"kubernetes.io/projected/354d0464-c5a5-483d-ad85-4961ce201392-kube-api-access-r4lhv\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.054785 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/354d0464-c5a5-483d-ad85-4961ce201392-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.054808 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hfbn\" (UniqueName: \"kubernetes.io/projected/f9a95cae-c3ff-487c-a26a-ba4a32363ace-kube-api-access-8hfbn\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.054823 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a95cae-c3ff-487c-a26a-ba4a32363ace-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.133156 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"] Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.289049 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"] Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.295766 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7796bb49d4-rbmxt"] Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 
05:35:43.306440 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"] Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.315083 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f8cd57fdc-9kt5h"] Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.635530 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.636816 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.642068 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.642460 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.672439 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.763078 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14448c84-158d-40ca-9048-57de369cd74a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14448c84-158d-40ca-9048-57de369cd74a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.763226 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14448c84-158d-40ca-9048-57de369cd74a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14448c84-158d-40ca-9048-57de369cd74a\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.865352 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14448c84-158d-40ca-9048-57de369cd74a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14448c84-158d-40ca-9048-57de369cd74a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.865470 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14448c84-158d-40ca-9048-57de369cd74a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14448c84-158d-40ca-9048-57de369cd74a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.865563 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14448c84-158d-40ca-9048-57de369cd74a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14448c84-158d-40ca-9048-57de369cd74a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.893819 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14448c84-158d-40ca-9048-57de369cd74a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14448c84-158d-40ca-9048-57de369cd74a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:43 crc kubenswrapper[4817]: I0314 05:35:43.960863 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:44 crc kubenswrapper[4817]: I0314 05:35:44.750698 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354d0464-c5a5-483d-ad85-4961ce201392" path="/var/lib/kubelet/pods/354d0464-c5a5-483d-ad85-4961ce201392/volumes" Mar 14 05:35:44 crc kubenswrapper[4817]: I0314 05:35:44.751507 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a95cae-c3ff-487c-a26a-ba4a32363ace" path="/var/lib/kubelet/pods/f9a95cae-c3ff-487c-a26a-ba4a32363ace/volumes" Mar 14 05:35:46 crc kubenswrapper[4817]: E0314 05:35:46.202304 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 14 05:35:46 crc kubenswrapper[4817]: E0314 05:35:46.202477 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdhv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h6v7h_openshift-marketplace(213ab431-bf6b-41ea-8117-d3406d2b654a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 05:35:46 crc kubenswrapper[4817]: E0314 05:35:46.203672 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h6v7h" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" Mar 14 05:35:47 crc 
kubenswrapper[4817]: I0314 05:35:47.854988 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.029655 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.030544 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.044931 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.147257 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kube-api-access\") pod \"installer-9-crc\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.147312 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-var-lock\") pod \"installer-9-crc\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.147361 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.248801 4817 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kube-api-access\") pod \"installer-9-crc\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.249328 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-var-lock\") pod \"installer-9-crc\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.249408 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-var-lock\") pod \"installer-9-crc\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.249483 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.249539 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.269575 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: I0314 05:35:49.372429 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 05:35:49 crc kubenswrapper[4817]: E0314 05:35:49.404076 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca is running failed: container process not found" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:49 crc kubenswrapper[4817]: E0314 05:35:49.404450 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca is running failed: container process not found" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:49 crc kubenswrapper[4817]: E0314 05:35:49.404665 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca is running failed: container process not found" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 14 05:35:49 crc kubenswrapper[4817]: E0314 05:35:49.404691 4817 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca is running failed: container process not found" probeType="Readiness" 
pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins" Mar 14 05:35:49 crc kubenswrapper[4817]: E0314 05:35:49.911843 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h6v7h" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" Mar 14 05:35:50 crc kubenswrapper[4817]: E0314 05:35:50.000598 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 14 05:35:50 crc kubenswrapper[4817]: E0314 05:35:50.000866 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlrgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hbjqr_openshift-marketplace(c132937c-20aa-47d7-903b-92a9ec65ba6f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 05:35:50 crc kubenswrapper[4817]: E0314 05:35:50.002143 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hbjqr" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" Mar 14 05:35:50 crc 
kubenswrapper[4817]: E0314 05:35:50.018882 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 14 05:35:50 crc kubenswrapper[4817]: E0314 05:35:50.019076 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfwll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-9nwvm_openshift-marketplace(6cad99d4-915e-406a-bca8-2b58fdc7c7ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 05:35:50 crc kubenswrapper[4817]: E0314 05:35:50.021088 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9nwvm" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" Mar 14 05:35:51 crc kubenswrapper[4817]: E0314 05:35:51.545844 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9nwvm" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" Mar 14 05:35:51 crc kubenswrapper[4817]: E0314 05:35:51.546817 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hbjqr" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" Mar 14 05:35:51 crc kubenswrapper[4817]: E0314 05:35:51.653207 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 14 05:35:51 crc kubenswrapper[4817]: E0314 05:35:51.653466 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxjcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mxvkz_openshift-marketplace(3bf969ab-d18a-43ef-88be-3e1337f14b4d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 05:35:51 crc kubenswrapper[4817]: E0314 05:35:51.654779 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mxvkz" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" Mar 14 05:35:51 crc kubenswrapper[4817]: E0314 05:35:51.656271 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 14 05:35:51 crc kubenswrapper[4817]: E0314 05:35:51.656414 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vc68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9p69t_openshift-marketplace(2c5f53ee-2afc-4fe8-a17c-10c9808edac2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 05:35:51 crc kubenswrapper[4817]: E0314 05:35:51.657622 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9p69t" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" Mar 14 05:35:52 crc kubenswrapper[4817]: E0314 05:35:52.886042 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mxvkz" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" Mar 14 05:35:52 crc kubenswrapper[4817]: E0314 05:35:52.886402 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9p69t" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" Mar 14 05:35:52 crc kubenswrapper[4817]: I0314 05:35:52.918817 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pk9ws_71e0f963-52a3-45c7-a104-bc2a081c6e8e/kube-multus-additional-cni-plugins/0.log" Mar 14 05:35:52 crc kubenswrapper[4817]: I0314 05:35:52.919191 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" event={"ID":"71e0f963-52a3-45c7-a104-bc2a081c6e8e","Type":"ContainerDied","Data":"0cb600e8af5c5e3e70b1b06aa1ab1268afd674d7527c6d613f10137a33cda00d"} Mar 14 05:35:52 crc kubenswrapper[4817]: I0314 05:35:52.919219 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb600e8af5c5e3e70b1b06aa1ab1268afd674d7527c6d613f10137a33cda00d" Mar 14 05:35:52 crc kubenswrapper[4817]: E0314 05:35:52.974539 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 14 05:35:52 crc kubenswrapper[4817]: E0314 05:35:52.974760 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwfdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-24f8l_openshift-marketplace(655a63e0-d806-4b09-a33f-aef9c8c58b54): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 05:35:52 crc kubenswrapper[4817]: E0314 05:35:52.976960 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-24f8l" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" Mar 14 05:35:52 crc 
kubenswrapper[4817]: E0314 05:35:52.992349 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 14 05:35:52 crc kubenswrapper[4817]: E0314 05:35:52.992618 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jxws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-n45mx_openshift-marketplace(7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 05:35:52 crc kubenswrapper[4817]: E0314 05:35:52.993924 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n45mx" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.027122 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-pk9ws_71e0f963-52a3-45c7-a104-bc2a081c6e8e/kube-multus-additional-cni-plugins/0.log" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.027202 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:35:53 crc kubenswrapper[4817]: E0314 05:35:53.064790 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 14 05:35:53 crc kubenswrapper[4817]: E0314 05:35:53.064962 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4fjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cs45w_openshift-marketplace(d94326da-6089-4fb4-be56-29635a38651f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 05:35:53 crc kubenswrapper[4817]: E0314 05:35:53.066283 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cs45w" podUID="d94326da-6089-4fb4-be56-29635a38651f" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.105718 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/71e0f963-52a3-45c7-a104-bc2a081c6e8e-cni-sysctl-allowlist\") pod \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.106104 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71e0f963-52a3-45c7-a104-bc2a081c6e8e-tuning-conf-dir\") pod \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.106126 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbcwq\" (UniqueName: \"kubernetes.io/projected/71e0f963-52a3-45c7-a104-bc2a081c6e8e-kube-api-access-wbcwq\") pod \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.106158 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/71e0f963-52a3-45c7-a104-bc2a081c6e8e-ready\") pod \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\" (UID: \"71e0f963-52a3-45c7-a104-bc2a081c6e8e\") " Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.106185 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71e0f963-52a3-45c7-a104-bc2a081c6e8e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "71e0f963-52a3-45c7-a104-bc2a081c6e8e" (UID: "71e0f963-52a3-45c7-a104-bc2a081c6e8e"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.106535 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71e0f963-52a3-45c7-a104-bc2a081c6e8e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "71e0f963-52a3-45c7-a104-bc2a081c6e8e" (UID: "71e0f963-52a3-45c7-a104-bc2a081c6e8e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.106865 4817 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/71e0f963-52a3-45c7-a104-bc2a081c6e8e-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.106915 4817 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71e0f963-52a3-45c7-a104-bc2a081c6e8e-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.107071 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71e0f963-52a3-45c7-a104-bc2a081c6e8e-ready" (OuterVolumeSpecName: "ready") pod "71e0f963-52a3-45c7-a104-bc2a081c6e8e" (UID: "71e0f963-52a3-45c7-a104-bc2a081c6e8e"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.113568 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e0f963-52a3-45c7-a104-bc2a081c6e8e-kube-api-access-wbcwq" (OuterVolumeSpecName: "kube-api-access-wbcwq") pod "71e0f963-52a3-45c7-a104-bc2a081c6e8e" (UID: "71e0f963-52a3-45c7-a104-bc2a081c6e8e"). InnerVolumeSpecName "kube-api-access-wbcwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.210274 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbcwq\" (UniqueName: \"kubernetes.io/projected/71e0f963-52a3-45c7-a104-bc2a081c6e8e-kube-api-access-wbcwq\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.210322 4817 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/71e0f963-52a3-45c7-a104-bc2a081c6e8e-ready\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.334700 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4lfsz"] Mar 14 05:35:53 crc kubenswrapper[4817]: W0314 05:35:53.338134 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae80926_3fb7_4be8_80a0_25c27ee13a03.slice/crio-fa60ff5f5c192b6e68d751c0cb739380f2964016d26efced6b1aac79d82ccf03 WatchSource:0}: Error finding container fa60ff5f5c192b6e68d751c0cb739380f2964016d26efced6b1aac79d82ccf03: Status 404 returned error can't find the container with id fa60ff5f5c192b6e68d751c0cb739380f2964016d26efced6b1aac79d82ccf03 Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.418468 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"] Mar 14 05:35:53 crc 
kubenswrapper[4817]: I0314 05:35:53.422546 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss"] Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.479746 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.486045 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 05:35:53 crc kubenswrapper[4817]: W0314 05:35:53.517246 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfb01e662_5d03_4606_bc4e_709eb9e76cd4.slice/crio-cb16c466b0c00cb02c693c4f101822f81c27ed5c79151e10ffd7110377c21d6a WatchSource:0}: Error finding container cb16c466b0c00cb02c693c4f101822f81c27ed5c79151e10ffd7110377c21d6a: Status 404 returned error can't find the container with id cb16c466b0c00cb02c693c4f101822f81c27ed5c79151e10ffd7110377c21d6a Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.722336 4817 csr.go:261] certificate signing request csr-xpj6q is approved, waiting to be issued Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.732723 4817 csr.go:257] certificate signing request csr-xpj6q is issued Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.955360 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" event={"ID":"7f475fb9-425e-4a0a-871b-7f4c8681f5d7","Type":"ContainerStarted","Data":"32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc"} Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.955848 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" event={"ID":"7f475fb9-425e-4a0a-871b-7f4c8681f5d7","Type":"ContainerStarted","Data":"486436a1050dfb07f21a37d22c64cb06179f04bae8725f139ee6947a98c905fe"} Mar 14 05:35:53 crc 
kubenswrapper[4817]: I0314 05:35:53.955481 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" podUID="7f475fb9-425e-4a0a-871b-7f4c8681f5d7" containerName="controller-manager" containerID="cri-o://32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc" gracePeriod=30 Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.956142 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.960181 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb01e662-5d03-4606-bc4e-709eb9e76cd4","Type":"ContainerStarted","Data":"cb16c466b0c00cb02c693c4f101822f81c27ed5c79151e10ffd7110377c21d6a"} Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.974949 4817 patch_prober.go:28] interesting pod/controller-manager-6b7cf8c6db-b8wss container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:36284->10.217.0.58:8443: read: connection reset by peer" start-of-body= Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.975044 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" podUID="7f475fb9-425e-4a0a-871b-7f4c8681f5d7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:36284->10.217.0.58:8443: read: connection reset by peer" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.977315 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" podStartSLOduration=30.977294298 podStartE2EDuration="30.977294298s" 
podCreationTimestamp="2026-03-14 05:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:53.975630109 +0000 UTC m=+208.013890875" watchObservedRunningTime="2026-03-14 05:35:53.977294298 +0000 UTC m=+208.015555044" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.977798 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" event={"ID":"23f443ac-640d-4acc-95c7-c757a8f111f8","Type":"ContainerStarted","Data":"5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220"} Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.977847 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" event={"ID":"23f443ac-640d-4acc-95c7-c757a8f111f8","Type":"ContainerStarted","Data":"e1b32479ee4d211ec63f706a35eb5e636e1506c678a369f95ac86479d42d276b"} Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.977992 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" podUID="23f443ac-640d-4acc-95c7-c757a8f111f8" containerName="route-controller-manager" containerID="cri-o://5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220" gracePeriod=30 Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.978513 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.986244 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" event={"ID":"aae80926-3fb7-4be8-80a0-25c27ee13a03","Type":"ContainerStarted","Data":"464f374cacd1454743f35b9d3c05b006bc275ca4ad19d8e13e66af41cf8f97f1"} Mar 14 05:35:53 crc 
kubenswrapper[4817]: I0314 05:35:53.986304 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" event={"ID":"aae80926-3fb7-4be8-80a0-25c27ee13a03","Type":"ContainerStarted","Data":"fa60ff5f5c192b6e68d751c0cb739380f2964016d26efced6b1aac79d82ccf03"} Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.989879 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14448c84-158d-40ca-9048-57de369cd74a","Type":"ContainerStarted","Data":"2c24dc301adfcbd14833aab50678181c9858cbd5a13d44356c2cd832843358b7"} Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.992928 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557774-2vhw8" event={"ID":"b35f8ad5-461a-4c6c-aba1-56b3358990f8","Type":"ContainerDied","Data":"75fb71a5bb1018a5ae6aee27017f19c9cf758b921a168f31b464fa1c07dab16c"} Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.992907 4817 generic.go:334] "Generic (PLEG): container finished" podID="b35f8ad5-461a-4c6c-aba1-56b3358990f8" containerID="75fb71a5bb1018a5ae6aee27017f19c9cf758b921a168f31b464fa1c07dab16c" exitCode=0 Mar 14 05:35:53 crc kubenswrapper[4817]: I0314 05:35:53.993249 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-pk9ws" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:53.999725 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" podStartSLOduration=30.999699994 podStartE2EDuration="30.999699994s" podCreationTimestamp="2026-03-14 05:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:53.996055887 +0000 UTC m=+208.034316653" watchObservedRunningTime="2026-03-14 05:35:53.999699994 +0000 UTC m=+208.037960740" Mar 14 05:35:54 crc kubenswrapper[4817]: E0314 05:35:54.001169 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n45mx" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" Mar 14 05:35:54 crc kubenswrapper[4817]: E0314 05:35:54.004565 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cs45w" podUID="d94326da-6089-4fb4-be56-29635a38651f" Mar 14 05:35:54 crc kubenswrapper[4817]: E0314 05:35:54.006466 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-24f8l" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.098627 4817 patch_prober.go:28] interesting 
pod/route-controller-manager-68f5b58f95-dwkkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:45694->10.217.0.57:8443: read: connection reset by peer" start-of-body= Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.098711 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" podUID="23f443ac-640d-4acc-95c7-c757a8f111f8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:45694->10.217.0.57:8443: read: connection reset by peer" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.243668 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pk9ws"] Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.247646 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-pk9ws"] Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.355722 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.428135 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-client-ca\") pod \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.428407 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prznb\" (UniqueName: \"kubernetes.io/projected/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-kube-api-access-prznb\") pod \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.428492 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-config\") pod \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.428542 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-proxy-ca-bundles\") pod \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.428597 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-serving-cert\") pod \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\" (UID: \"7f475fb9-425e-4a0a-871b-7f4c8681f5d7\") " Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.429686 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7f475fb9-425e-4a0a-871b-7f4c8681f5d7" (UID: "7f475fb9-425e-4a0a-871b-7f4c8681f5d7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.429768 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-config" (OuterVolumeSpecName: "config") pod "7f475fb9-425e-4a0a-871b-7f4c8681f5d7" (UID: "7f475fb9-425e-4a0a-871b-7f4c8681f5d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.430021 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f475fb9-425e-4a0a-871b-7f4c8681f5d7" (UID: "7f475fb9-425e-4a0a-871b-7f4c8681f5d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.435089 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-kube-api-access-prznb" (OuterVolumeSpecName: "kube-api-access-prznb") pod "7f475fb9-425e-4a0a-871b-7f4c8681f5d7" (UID: "7f475fb9-425e-4a0a-871b-7f4c8681f5d7"). InnerVolumeSpecName "kube-api-access-prznb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.435668 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f475fb9-425e-4a0a-871b-7f4c8681f5d7" (UID: "7f475fb9-425e-4a0a-871b-7f4c8681f5d7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.531372 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prznb\" (UniqueName: \"kubernetes.io/projected/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-kube-api-access-prznb\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.531438 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.531451 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.531462 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.531473 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f475fb9-425e-4a0a-871b-7f4c8681f5d7-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.735068 4817 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-20 09:33:18.686678882 +0000 UTC Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.735098 4817 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6747h57m23.951582549s for next certificate rotation Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.745083 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" 
path="/var/lib/kubelet/pods/71e0f963-52a3-45c7-a104-bc2a081c6e8e/volumes" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.821158 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-68f5b58f95-dwkkq_23f443ac-640d-4acc-95c7-c757a8f111f8/route-controller-manager/0.log" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.821233 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.937581 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-config\") pod \"23f443ac-640d-4acc-95c7-c757a8f111f8\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.937676 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23f443ac-640d-4acc-95c7-c757a8f111f8-serving-cert\") pod \"23f443ac-640d-4acc-95c7-c757a8f111f8\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.937768 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-client-ca\") pod \"23f443ac-640d-4acc-95c7-c757a8f111f8\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.938002 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr6hh\" (UniqueName: \"kubernetes.io/projected/23f443ac-640d-4acc-95c7-c757a8f111f8-kube-api-access-gr6hh\") pod \"23f443ac-640d-4acc-95c7-c757a8f111f8\" (UID: \"23f443ac-640d-4acc-95c7-c757a8f111f8\") " Mar 14 05:35:54 crc 
kubenswrapper[4817]: I0314 05:35:54.938640 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-config" (OuterVolumeSpecName: "config") pod "23f443ac-640d-4acc-95c7-c757a8f111f8" (UID: "23f443ac-640d-4acc-95c7-c757a8f111f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.938816 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-client-ca" (OuterVolumeSpecName: "client-ca") pod "23f443ac-640d-4acc-95c7-c757a8f111f8" (UID: "23f443ac-640d-4acc-95c7-c757a8f111f8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.943670 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f443ac-640d-4acc-95c7-c757a8f111f8-kube-api-access-gr6hh" (OuterVolumeSpecName: "kube-api-access-gr6hh") pod "23f443ac-640d-4acc-95c7-c757a8f111f8" (UID: "23f443ac-640d-4acc-95c7-c757a8f111f8"). InnerVolumeSpecName "kube-api-access-gr6hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:35:54 crc kubenswrapper[4817]: I0314 05:35:54.943965 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f443ac-640d-4acc-95c7-c757a8f111f8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23f443ac-640d-4acc-95c7-c757a8f111f8" (UID: "23f443ac-640d-4acc-95c7-c757a8f111f8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.000750 4817 generic.go:334] "Generic (PLEG): container finished" podID="14448c84-158d-40ca-9048-57de369cd74a" containerID="88299106c9cfcf4dd7fdaf6bab398a3c159d9047a59b9fc059d82609de2348d1" exitCode=0 Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.000811 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14448c84-158d-40ca-9048-57de369cd74a","Type":"ContainerDied","Data":"88299106c9cfcf4dd7fdaf6bab398a3c159d9047a59b9fc059d82609de2348d1"} Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.003725 4817 generic.go:334] "Generic (PLEG): container finished" podID="7f475fb9-425e-4a0a-871b-7f4c8681f5d7" containerID="32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc" exitCode=0 Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.003782 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" event={"ID":"7f475fb9-425e-4a0a-871b-7f4c8681f5d7","Type":"ContainerDied","Data":"32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc"} Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.003804 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" event={"ID":"7f475fb9-425e-4a0a-871b-7f4c8681f5d7","Type":"ContainerDied","Data":"486436a1050dfb07f21a37d22c64cb06179f04bae8725f139ee6947a98c905fe"} Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.003825 4817 scope.go:117] "RemoveContainer" containerID="32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.004142 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.006569 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb01e662-5d03-4606-bc4e-709eb9e76cd4","Type":"ContainerStarted","Data":"7f2843807302d3d8313a9a1244abe452999cec00c12125ccc8b8f592bda5088f"} Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.010426 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-68f5b58f95-dwkkq_23f443ac-640d-4acc-95c7-c757a8f111f8/route-controller-manager/0.log" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.010493 4817 generic.go:334] "Generic (PLEG): container finished" podID="23f443ac-640d-4acc-95c7-c757a8f111f8" containerID="5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220" exitCode=255 Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.010572 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.010732 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" event={"ID":"23f443ac-640d-4acc-95c7-c757a8f111f8","Type":"ContainerDied","Data":"5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220"} Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.010771 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq" event={"ID":"23f443ac-640d-4acc-95c7-c757a8f111f8","Type":"ContainerDied","Data":"e1b32479ee4d211ec63f706a35eb5e636e1506c678a369f95ac86479d42d276b"} Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.013677 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4lfsz" event={"ID":"aae80926-3fb7-4be8-80a0-25c27ee13a03","Type":"ContainerStarted","Data":"d107ea6aaf0d90907a389bbc7d60dbce6bfd7ee4293fbe00e2a096e7f54c8f50"} Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.031677 4817 scope.go:117] "RemoveContainer" containerID="32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc" Mar 14 05:35:55 crc kubenswrapper[4817]: E0314 05:35:55.032188 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc\": container with ID starting with 32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc not found: ID does not exist" containerID="32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.032251 4817 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc"} err="failed to get container status \"32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc\": rpc error: code = NotFound desc = could not find container \"32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc\": container with ID starting with 32ef2a89aeedb6c7f83e9238e21f441963bba88464111f2c91f0f2f3e1bee9fc not found: ID does not exist" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.032286 4817 scope.go:117] "RemoveContainer" containerID="5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.042476 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr6hh\" (UniqueName: \"kubernetes.io/projected/23f443ac-640d-4acc-95c7-c757a8f111f8-kube-api-access-gr6hh\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.044002 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.044039 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23f443ac-640d-4acc-95c7-c757a8f111f8-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.044052 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23f443ac-640d-4acc-95c7-c757a8f111f8-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.056414 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4lfsz" podStartSLOduration=145.056282065 podStartE2EDuration="2m25.056282065s" podCreationTimestamp="2026-03-14 05:33:30 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:55.039072111 +0000 UTC m=+209.077332867" watchObservedRunningTime="2026-03-14 05:35:55.056282065 +0000 UTC m=+209.094542801" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.062782 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss"] Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.066757 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b7cf8c6db-b8wss"] Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.074910 4817 scope.go:117] "RemoveContainer" containerID="5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220" Mar 14 05:35:55 crc kubenswrapper[4817]: E0314 05:35:55.076121 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220\": container with ID starting with 5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220 not found: ID does not exist" containerID="5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.076176 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220"} err="failed to get container status \"5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220\": rpc error: code = NotFound desc = could not find container \"5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220\": container with ID starting with 5943003a9dbfeb9a70dcaf4bde9740d6d827eca7eaa3997027567cd5dcb81220 not found: ID does not exist" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.085475 4817 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.085460439 podStartE2EDuration="6.085460439s" podCreationTimestamp="2026-03-14 05:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:55.074593581 +0000 UTC m=+209.112854337" watchObservedRunningTime="2026-03-14 05:35:55.085460439 +0000 UTC m=+209.123721195" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.087742 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"] Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.091515 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f5b58f95-dwkkq"] Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.237680 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh"] Mar 14 05:35:55 crc kubenswrapper[4817]: E0314 05:35:55.238108 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f475fb9-425e-4a0a-871b-7f4c8681f5d7" containerName="controller-manager" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.238128 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f475fb9-425e-4a0a-871b-7f4c8681f5d7" containerName="controller-manager" Mar 14 05:35:55 crc kubenswrapper[4817]: E0314 05:35:55.238146 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.238173 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins" Mar 14 05:35:55 crc kubenswrapper[4817]: E0314 05:35:55.238191 4817 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="23f443ac-640d-4acc-95c7-c757a8f111f8" containerName="route-controller-manager" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.238197 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f443ac-640d-4acc-95c7-c757a8f111f8" containerName="route-controller-manager" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.238346 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f475fb9-425e-4a0a-871b-7f4c8681f5d7" containerName="controller-manager" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.238363 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f443ac-640d-4acc-95c7-c757a8f111f8" containerName="route-controller-manager" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.238370 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e0f963-52a3-45c7-a104-bc2a081c6e8e" containerName="kube-multus-additional-cni-plugins" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.238807 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.246102 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.246911 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.247622 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.248542 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.248706 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.248875 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.265050 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-674c8cfddd-x6rtp"] Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.267116 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.269333 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.269511 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.271107 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.271272 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.271420 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.271549 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.275118 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh"] Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.276428 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557774-2vhw8" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.280294 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.283601 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-674c8cfddd-x6rtp"] Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.347410 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56rnd\" (UniqueName: \"kubernetes.io/projected/b35f8ad5-461a-4c6c-aba1-56b3358990f8-kube-api-access-56rnd\") pod \"b35f8ad5-461a-4c6c-aba1-56b3358990f8\" (UID: \"b35f8ad5-461a-4c6c-aba1-56b3358990f8\") " Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.347776 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7285cb0-bab0-440c-9bfa-73030beeea10-serving-cert\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.347887 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-config\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.347983 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-client-ca\") pod 
\"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.348081 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-client-ca\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.348165 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-config\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.348235 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbkw6\" (UniqueName: \"kubernetes.io/projected/1466c984-39e9-447b-99d9-9eb03c521496-kube-api-access-tbkw6\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.348307 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh97x\" (UniqueName: \"kubernetes.io/projected/f7285cb0-bab0-440c-9bfa-73030beeea10-kube-api-access-lh97x\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 
crc kubenswrapper[4817]: I0314 05:35:55.348371 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1466c984-39e9-447b-99d9-9eb03c521496-serving-cert\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.348481 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-proxy-ca-bundles\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.351623 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35f8ad5-461a-4c6c-aba1-56b3358990f8-kube-api-access-56rnd" (OuterVolumeSpecName: "kube-api-access-56rnd") pod "b35f8ad5-461a-4c6c-aba1-56b3358990f8" (UID: "b35f8ad5-461a-4c6c-aba1-56b3358990f8"). InnerVolumeSpecName "kube-api-access-56rnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.449932 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-config\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.450218 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-client-ca\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.450371 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-client-ca\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.450481 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-config\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.450645 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbkw6\" (UniqueName: \"kubernetes.io/projected/1466c984-39e9-447b-99d9-9eb03c521496-kube-api-access-tbkw6\") 
pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.450736 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh97x\" (UniqueName: \"kubernetes.io/projected/f7285cb0-bab0-440c-9bfa-73030beeea10-kube-api-access-lh97x\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.450819 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1466c984-39e9-447b-99d9-9eb03c521496-serving-cert\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.450952 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-proxy-ca-bundles\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.451064 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7285cb0-bab0-440c-9bfa-73030beeea10-serving-cert\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.451174 4817 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-56rnd\" (UniqueName: \"kubernetes.io/projected/b35f8ad5-461a-4c6c-aba1-56b3358990f8-kube-api-access-56rnd\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.453347 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-client-ca\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.454082 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-proxy-ca-bundles\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.454146 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-client-ca\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.454852 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-config\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.455048 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-config\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.455059 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7285cb0-bab0-440c-9bfa-73030beeea10-serving-cert\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.463579 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1466c984-39e9-447b-99d9-9eb03c521496-serving-cert\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.478536 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh97x\" (UniqueName: \"kubernetes.io/projected/f7285cb0-bab0-440c-9bfa-73030beeea10-kube-api-access-lh97x\") pod \"route-controller-manager-7846d5d576-9s7mh\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.482053 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbkw6\" (UniqueName: \"kubernetes.io/projected/1466c984-39e9-447b-99d9-9eb03c521496-kube-api-access-tbkw6\") pod \"controller-manager-674c8cfddd-x6rtp\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc 
kubenswrapper[4817]: I0314 05:35:55.604839 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.616036 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.736105 4817 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-04 06:10:37.4655756 +0000 UTC Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.736362 4817 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7104h34m41.729215795s for next certificate rotation Mar 14 05:35:55 crc kubenswrapper[4817]: I0314 05:35:55.784618 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-674c8cfddd-x6rtp"] Mar 14 05:35:55 crc kubenswrapper[4817]: W0314 05:35:55.787939 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1466c984_39e9_447b_99d9_9eb03c521496.slice/crio-9dac33a86f19bc50f0b8b9880eaf82b6aecacf31b63c5389b19d4a95938981d2 WatchSource:0}: Error finding container 9dac33a86f19bc50f0b8b9880eaf82b6aecacf31b63c5389b19d4a95938981d2: Status 404 returned error can't find the container with id 9dac33a86f19bc50f0b8b9880eaf82b6aecacf31b63c5389b19d4a95938981d2 Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.019872 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" event={"ID":"1466c984-39e9-447b-99d9-9eb03c521496","Type":"ContainerStarted","Data":"4e9d6475738a762655f6390cb8520b0935609002d2a64b4f6a11d379595d0e57"} Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.020792 4817 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" event={"ID":"1466c984-39e9-447b-99d9-9eb03c521496","Type":"ContainerStarted","Data":"9dac33a86f19bc50f0b8b9880eaf82b6aecacf31b63c5389b19d4a95938981d2"} Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.020849 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.022576 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557774-2vhw8" event={"ID":"b35f8ad5-461a-4c6c-aba1-56b3358990f8","Type":"ContainerDied","Data":"64917e03e397e23ce24be6b0e9bd4f2394907ac4a51464d6699bb3b132bdd4d5"} Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.022705 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64917e03e397e23ce24be6b0e9bd4f2394907ac4a51464d6699bb3b132bdd4d5" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.022589 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557774-2vhw8" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.022775 4817 patch_prober.go:28] interesting pod/controller-manager-674c8cfddd-x6rtp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.022823 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" podUID="1466c984-39e9-447b-99d9-9eb03c521496" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.029802 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh"] Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.040936 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" podStartSLOduration=13.040911371 podStartE2EDuration="13.040911371s" podCreationTimestamp="2026-03-14 05:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:35:56.037546732 +0000 UTC m=+210.075807518" watchObservedRunningTime="2026-03-14 05:35:56.040911371 +0000 UTC m=+210.079172117" Mar 14 05:35:56 crc kubenswrapper[4817]: W0314 05:35:56.041456 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7285cb0_bab0_440c_9bfa_73030beeea10.slice/crio-fed84654abd98d0cc5324852515ce8db24123670a2751de806be30b3d327b802 WatchSource:0}: Error finding container 
fed84654abd98d0cc5324852515ce8db24123670a2751de806be30b3d327b802: Status 404 returned error can't find the container with id fed84654abd98d0cc5324852515ce8db24123670a2751de806be30b3d327b802 Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.279713 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.364579 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14448c84-158d-40ca-9048-57de369cd74a-kubelet-dir\") pod \"14448c84-158d-40ca-9048-57de369cd74a\" (UID: \"14448c84-158d-40ca-9048-57de369cd74a\") " Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.364914 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14448c84-158d-40ca-9048-57de369cd74a-kube-api-access\") pod \"14448c84-158d-40ca-9048-57de369cd74a\" (UID: \"14448c84-158d-40ca-9048-57de369cd74a\") " Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.364763 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14448c84-158d-40ca-9048-57de369cd74a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14448c84-158d-40ca-9048-57de369cd74a" (UID: "14448c84-158d-40ca-9048-57de369cd74a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.365183 4817 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14448c84-158d-40ca-9048-57de369cd74a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.369860 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14448c84-158d-40ca-9048-57de369cd74a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14448c84-158d-40ca-9048-57de369cd74a" (UID: "14448c84-158d-40ca-9048-57de369cd74a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.466474 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14448c84-158d-40ca-9048-57de369cd74a-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.740103 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f443ac-640d-4acc-95c7-c757a8f111f8" path="/var/lib/kubelet/pods/23f443ac-640d-4acc-95c7-c757a8f111f8/volumes" Mar 14 05:35:56 crc kubenswrapper[4817]: I0314 05:35:56.740960 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f475fb9-425e-4a0a-871b-7f4c8681f5d7" path="/var/lib/kubelet/pods/7f475fb9-425e-4a0a-871b-7f4c8681f5d7/volumes" Mar 14 05:35:57 crc kubenswrapper[4817]: I0314 05:35:57.034518 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 05:35:57 crc kubenswrapper[4817]: I0314 05:35:57.034529 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14448c84-158d-40ca-9048-57de369cd74a","Type":"ContainerDied","Data":"2c24dc301adfcbd14833aab50678181c9858cbd5a13d44356c2cd832843358b7"} Mar 14 05:35:57 crc kubenswrapper[4817]: I0314 05:35:57.034570 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c24dc301adfcbd14833aab50678181c9858cbd5a13d44356c2cd832843358b7" Mar 14 05:35:57 crc kubenswrapper[4817]: I0314 05:35:57.036557 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" event={"ID":"f7285cb0-bab0-440c-9bfa-73030beeea10","Type":"ContainerStarted","Data":"7265edbfbae46aa9bd92cb5236dc8caa41cf828391626a33c6a5e9e1ad9a1840"} Mar 14 05:35:57 crc kubenswrapper[4817]: I0314 05:35:57.036616 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" event={"ID":"f7285cb0-bab0-440c-9bfa-73030beeea10","Type":"ContainerStarted","Data":"fed84654abd98d0cc5324852515ce8db24123670a2751de806be30b3d327b802"} Mar 14 05:35:57 crc kubenswrapper[4817]: I0314 05:35:57.041374 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:35:57 crc kubenswrapper[4817]: I0314 05:35:57.060540 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" podStartSLOduration=14.06051162 podStartE2EDuration="14.06051162s" podCreationTimestamp="2026-03-14 05:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 05:35:57.050041543 +0000 UTC m=+211.088302289" watchObservedRunningTime="2026-03-14 05:35:57.06051162 +0000 UTC m=+211.098772396" Mar 14 05:35:58 crc kubenswrapper[4817]: I0314 05:35:58.041177 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:35:58 crc kubenswrapper[4817]: I0314 05:35:58.046553 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.138199 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557776-ch6qq"] Mar 14 05:36:00 crc kubenswrapper[4817]: E0314 05:36:00.138502 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14448c84-158d-40ca-9048-57de369cd74a" containerName="pruner" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.138521 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="14448c84-158d-40ca-9048-57de369cd74a" containerName="pruner" Mar 14 05:36:00 crc kubenswrapper[4817]: E0314 05:36:00.138535 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35f8ad5-461a-4c6c-aba1-56b3358990f8" containerName="oc" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.138545 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35f8ad5-461a-4c6c-aba1-56b3358990f8" containerName="oc" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.138691 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35f8ad5-461a-4c6c-aba1-56b3358990f8" containerName="oc" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.138708 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="14448c84-158d-40ca-9048-57de369cd74a" containerName="pruner" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.139295 4817 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557776-ch6qq" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.141277 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.142412 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.144385 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.151652 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557776-ch6qq"] Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.214076 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bx5\" (UniqueName: \"kubernetes.io/projected/b53bc251-d4cf-4e2c-b731-84eed88c78af-kube-api-access-49bx5\") pod \"auto-csr-approver-29557776-ch6qq\" (UID: \"b53bc251-d4cf-4e2c-b731-84eed88c78af\") " pod="openshift-infra/auto-csr-approver-29557776-ch6qq" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.316237 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49bx5\" (UniqueName: \"kubernetes.io/projected/b53bc251-d4cf-4e2c-b731-84eed88c78af-kube-api-access-49bx5\") pod \"auto-csr-approver-29557776-ch6qq\" (UID: \"b53bc251-d4cf-4e2c-b731-84eed88c78af\") " pod="openshift-infra/auto-csr-approver-29557776-ch6qq" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.344013 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49bx5\" (UniqueName: \"kubernetes.io/projected/b53bc251-d4cf-4e2c-b731-84eed88c78af-kube-api-access-49bx5\") pod \"auto-csr-approver-29557776-ch6qq\" (UID: 
\"b53bc251-d4cf-4e2c-b731-84eed88c78af\") " pod="openshift-infra/auto-csr-approver-29557776-ch6qq" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.459862 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557776-ch6qq" Mar 14 05:36:00 crc kubenswrapper[4817]: I0314 05:36:00.857687 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557776-ch6qq"] Mar 14 05:36:00 crc kubenswrapper[4817]: W0314 05:36:00.868356 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb53bc251_d4cf_4e2c_b731_84eed88c78af.slice/crio-397a7adf85a84b23ce8c15025af5927bf39d3ed537cd6ee76ec8e5121266b5e0 WatchSource:0}: Error finding container 397a7adf85a84b23ce8c15025af5927bf39d3ed537cd6ee76ec8e5121266b5e0: Status 404 returned error can't find the container with id 397a7adf85a84b23ce8c15025af5927bf39d3ed537cd6ee76ec8e5121266b5e0 Mar 14 05:36:01 crc kubenswrapper[4817]: I0314 05:36:01.057443 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557776-ch6qq" event={"ID":"b53bc251-d4cf-4e2c-b731-84eed88c78af","Type":"ContainerStarted","Data":"397a7adf85a84b23ce8c15025af5927bf39d3ed537cd6ee76ec8e5121266b5e0"} Mar 14 05:36:03 crc kubenswrapper[4817]: I0314 05:36:03.069371 4817 generic.go:334] "Generic (PLEG): container finished" podID="b53bc251-d4cf-4e2c-b731-84eed88c78af" containerID="77a2c3b96e5aac48d866370f2e4fee59c152932f017c1469a80968926ce9398a" exitCode=0 Mar 14 05:36:03 crc kubenswrapper[4817]: I0314 05:36:03.069470 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557776-ch6qq" event={"ID":"b53bc251-d4cf-4e2c-b731-84eed88c78af","Type":"ContainerDied","Data":"77a2c3b96e5aac48d866370f2e4fee59c152932f017c1469a80968926ce9398a"} Mar 14 05:36:04 crc kubenswrapper[4817]: I0314 05:36:04.372661 4817 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557776-ch6qq" Mar 14 05:36:04 crc kubenswrapper[4817]: I0314 05:36:04.465465 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49bx5\" (UniqueName: \"kubernetes.io/projected/b53bc251-d4cf-4e2c-b731-84eed88c78af-kube-api-access-49bx5\") pod \"b53bc251-d4cf-4e2c-b731-84eed88c78af\" (UID: \"b53bc251-d4cf-4e2c-b731-84eed88c78af\") " Mar 14 05:36:04 crc kubenswrapper[4817]: I0314 05:36:04.473148 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53bc251-d4cf-4e2c-b731-84eed88c78af-kube-api-access-49bx5" (OuterVolumeSpecName: "kube-api-access-49bx5") pod "b53bc251-d4cf-4e2c-b731-84eed88c78af" (UID: "b53bc251-d4cf-4e2c-b731-84eed88c78af"). InnerVolumeSpecName "kube-api-access-49bx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:36:04 crc kubenswrapper[4817]: I0314 05:36:04.567576 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49bx5\" (UniqueName: \"kubernetes.io/projected/b53bc251-d4cf-4e2c-b731-84eed88c78af-kube-api-access-49bx5\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:05 crc kubenswrapper[4817]: I0314 05:36:05.087542 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557776-ch6qq" event={"ID":"b53bc251-d4cf-4e2c-b731-84eed88c78af","Type":"ContainerDied","Data":"397a7adf85a84b23ce8c15025af5927bf39d3ed537cd6ee76ec8e5121266b5e0"} Mar 14 05:36:05 crc kubenswrapper[4817]: I0314 05:36:05.087927 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="397a7adf85a84b23ce8c15025af5927bf39d3ed537cd6ee76ec8e5121266b5e0" Mar 14 05:36:05 crc kubenswrapper[4817]: I0314 05:36:05.088022 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557776-ch6qq" Mar 14 05:36:05 crc kubenswrapper[4817]: I0314 05:36:05.098884 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7h" event={"ID":"213ab431-bf6b-41ea-8117-d3406d2b654a","Type":"ContainerStarted","Data":"498ceaab4a4b89652fc8991b3335b88f5f1823e7f4ec0ffeaa61b548b962321a"} Mar 14 05:36:06 crc kubenswrapper[4817]: I0314 05:36:06.105736 4817 generic.go:334] "Generic (PLEG): container finished" podID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerID="e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf" exitCode=0 Mar 14 05:36:06 crc kubenswrapper[4817]: I0314 05:36:06.105806 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbjqr" event={"ID":"c132937c-20aa-47d7-903b-92a9ec65ba6f","Type":"ContainerDied","Data":"e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf"} Mar 14 05:36:06 crc kubenswrapper[4817]: I0314 05:36:06.109270 4817 generic.go:334] "Generic (PLEG): container finished" podID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerID="498ceaab4a4b89652fc8991b3335b88f5f1823e7f4ec0ffeaa61b548b962321a" exitCode=0 Mar 14 05:36:06 crc kubenswrapper[4817]: I0314 05:36:06.109301 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7h" event={"ID":"213ab431-bf6b-41ea-8117-d3406d2b654a","Type":"ContainerDied","Data":"498ceaab4a4b89652fc8991b3335b88f5f1823e7f4ec0ffeaa61b548b962321a"} Mar 14 05:36:06 crc kubenswrapper[4817]: I0314 05:36:06.109332 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7h" event={"ID":"213ab431-bf6b-41ea-8117-d3406d2b654a","Type":"ContainerStarted","Data":"e7e15b68aef6edab58ae33b877b145b7eef8e3bdedd965d49939972de4b06e4b"} Mar 14 05:36:06 crc kubenswrapper[4817]: I0314 05:36:06.139932 4817 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/community-operators-h6v7h" podStartSLOduration=2.680319451 podStartE2EDuration="1m0.13991631s" podCreationTimestamp="2026-03-14 05:35:06 +0000 UTC" firstStartedPulling="2026-03-14 05:35:08.066524803 +0000 UTC m=+162.104785549" lastFinishedPulling="2026-03-14 05:36:05.526121662 +0000 UTC m=+219.564382408" observedRunningTime="2026-03-14 05:36:06.137397177 +0000 UTC m=+220.175657923" watchObservedRunningTime="2026-03-14 05:36:06.13991631 +0000 UTC m=+220.178177056" Mar 14 05:36:07 crc kubenswrapper[4817]: I0314 05:36:07.100820 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6v7h" Mar 14 05:36:07 crc kubenswrapper[4817]: I0314 05:36:07.101168 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6v7h" Mar 14 05:36:07 crc kubenswrapper[4817]: I0314 05:36:07.120447 4817 generic.go:334] "Generic (PLEG): container finished" podID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerID="9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434" exitCode=0 Mar 14 05:36:07 crc kubenswrapper[4817]: I0314 05:36:07.120519 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n45mx" event={"ID":"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf","Type":"ContainerDied","Data":"9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434"} Mar 14 05:36:07 crc kubenswrapper[4817]: I0314 05:36:07.124758 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbjqr" event={"ID":"c132937c-20aa-47d7-903b-92a9ec65ba6f","Type":"ContainerStarted","Data":"d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a"} Mar 14 05:36:07 crc kubenswrapper[4817]: I0314 05:36:07.161623 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hbjqr" 
podStartSLOduration=2.714032836 podStartE2EDuration="1m1.161606531s" podCreationTimestamp="2026-03-14 05:35:06 +0000 UTC" firstStartedPulling="2026-03-14 05:35:08.045880575 +0000 UTC m=+162.084141321" lastFinishedPulling="2026-03-14 05:36:06.49345427 +0000 UTC m=+220.531715016" observedRunningTime="2026-03-14 05:36:07.15986733 +0000 UTC m=+221.198128076" watchObservedRunningTime="2026-03-14 05:36:07.161606531 +0000 UTC m=+221.199867277" Mar 14 05:36:08 crc kubenswrapper[4817]: I0314 05:36:08.231486 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-h6v7h" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerName="registry-server" probeResult="failure" output=< Mar 14 05:36:08 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:36:08 crc kubenswrapper[4817]: > Mar 14 05:36:09 crc kubenswrapper[4817]: I0314 05:36:09.136147 4817 generic.go:334] "Generic (PLEG): container finished" podID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerID="44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b" exitCode=0 Mar 14 05:36:09 crc kubenswrapper[4817]: I0314 05:36:09.136269 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p69t" event={"ID":"2c5f53ee-2afc-4fe8-a17c-10c9808edac2","Type":"ContainerDied","Data":"44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b"} Mar 14 05:36:09 crc kubenswrapper[4817]: I0314 05:36:09.140121 4817 generic.go:334] "Generic (PLEG): container finished" podID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerID="97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e" exitCode=0 Mar 14 05:36:09 crc kubenswrapper[4817]: I0314 05:36:09.140178 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24f8l" 
event={"ID":"655a63e0-d806-4b09-a33f-aef9c8c58b54","Type":"ContainerDied","Data":"97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e"} Mar 14 05:36:10 crc kubenswrapper[4817]: I0314 05:36:10.148176 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n45mx" event={"ID":"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf","Type":"ContainerStarted","Data":"e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066"} Mar 14 05:36:10 crc kubenswrapper[4817]: I0314 05:36:10.166080 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n45mx" podStartSLOduration=3.250130856 podStartE2EDuration="1m2.166059567s" podCreationTimestamp="2026-03-14 05:35:08 +0000 UTC" firstStartedPulling="2026-03-14 05:35:10.249667278 +0000 UTC m=+164.287928024" lastFinishedPulling="2026-03-14 05:36:09.165595989 +0000 UTC m=+223.203856735" observedRunningTime="2026-03-14 05:36:10.16413934 +0000 UTC m=+224.202400096" watchObservedRunningTime="2026-03-14 05:36:10.166059567 +0000 UTC m=+224.204320313" Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.183371 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p69t" event={"ID":"2c5f53ee-2afc-4fe8-a17c-10c9808edac2","Type":"ContainerStarted","Data":"96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820"} Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.186765 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24f8l" event={"ID":"655a63e0-d806-4b09-a33f-aef9c8c58b54","Type":"ContainerStarted","Data":"d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481"} Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.188530 4817 generic.go:334] "Generic (PLEG): container finished" podID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerID="e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d" 
exitCode=0 Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.188569 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvkz" event={"ID":"3bf969ab-d18a-43ef-88be-3e1337f14b4d","Type":"ContainerDied","Data":"e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d"} Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.191089 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cs45w" event={"ID":"d94326da-6089-4fb4-be56-29635a38651f","Type":"ContainerStarted","Data":"6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a"} Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.193249 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nwvm" event={"ID":"6cad99d4-915e-406a-bca8-2b58fdc7c7ac","Type":"ContainerStarted","Data":"a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c"} Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.214647 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9p69t" podStartSLOduration=3.068717745 podStartE2EDuration="1m10.21463286s" podCreationTimestamp="2026-03-14 05:35:06 +0000 UTC" firstStartedPulling="2026-03-14 05:35:07.995460295 +0000 UTC m=+162.033721041" lastFinishedPulling="2026-03-14 05:36:15.14137539 +0000 UTC m=+229.179636156" observedRunningTime="2026-03-14 05:36:16.210625873 +0000 UTC m=+230.248886619" watchObservedRunningTime="2026-03-14 05:36:16.21463286 +0000 UTC m=+230.252893606" Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.255662 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-24f8l" podStartSLOduration=3.372965729 podStartE2EDuration="1m8.255642241s" podCreationTimestamp="2026-03-14 05:35:08 +0000 UTC" firstStartedPulling="2026-03-14 05:35:10.298557045 +0000 UTC m=+164.336817791" 
lastFinishedPulling="2026-03-14 05:36:15.181233527 +0000 UTC m=+229.219494303" observedRunningTime="2026-03-14 05:36:16.254996862 +0000 UTC m=+230.293257618" watchObservedRunningTime="2026-03-14 05:36:16.255642241 +0000 UTC m=+230.293902997" Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.649387 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.649483 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.713728 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.850059 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:36:16 crc kubenswrapper[4817]: I0314 05:36:16.850132 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9p69t" Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.149866 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6v7h" Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.193100 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h6v7h" Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.200620 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvkz" event={"ID":"3bf969ab-d18a-43ef-88be-3e1337f14b4d","Type":"ContainerStarted","Data":"67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d"} Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.202305 4817 generic.go:334] "Generic (PLEG): 
container finished" podID="d94326da-6089-4fb4-be56-29635a38651f" containerID="6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a" exitCode=0 Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.202377 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cs45w" event={"ID":"d94326da-6089-4fb4-be56-29635a38651f","Type":"ContainerDied","Data":"6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a"} Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.204105 4817 generic.go:334] "Generic (PLEG): container finished" podID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerID="a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c" exitCode=0 Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.204241 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nwvm" event={"ID":"6cad99d4-915e-406a-bca8-2b58fdc7c7ac","Type":"ContainerDied","Data":"a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c"} Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.251951 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.263022 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mxvkz" podStartSLOduration=2.701428106 podStartE2EDuration="1m11.263004041s" podCreationTimestamp="2026-03-14 05:35:06 +0000 UTC" firstStartedPulling="2026-03-14 05:35:08.049352766 +0000 UTC m=+162.087613512" lastFinishedPulling="2026-03-14 05:36:16.610928701 +0000 UTC m=+230.649189447" observedRunningTime="2026-03-14 05:36:17.240521683 +0000 UTC m=+231.278782429" watchObservedRunningTime="2026-03-14 05:36:17.263004041 +0000 UTC m=+231.301264787" Mar 14 05:36:17 crc kubenswrapper[4817]: I0314 05:36:17.891245 4817 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-9p69t" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="registry-server" probeResult="failure" output=< Mar 14 05:36:17 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:36:17 crc kubenswrapper[4817]: > Mar 14 05:36:18 crc kubenswrapper[4817]: I0314 05:36:18.212047 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cs45w" event={"ID":"d94326da-6089-4fb4-be56-29635a38651f","Type":"ContainerStarted","Data":"908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96"} Mar 14 05:36:18 crc kubenswrapper[4817]: I0314 05:36:18.216252 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nwvm" event={"ID":"6cad99d4-915e-406a-bca8-2b58fdc7c7ac","Type":"ContainerStarted","Data":"62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610"} Mar 14 05:36:18 crc kubenswrapper[4817]: I0314 05:36:18.230771 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cs45w" podStartSLOduration=3.946088618 podStartE2EDuration="1m9.230755272s" podCreationTimestamp="2026-03-14 05:35:09 +0000 UTC" firstStartedPulling="2026-03-14 05:35:12.407059618 +0000 UTC m=+166.445320364" lastFinishedPulling="2026-03-14 05:36:17.691726272 +0000 UTC m=+231.729987018" observedRunningTime="2026-03-14 05:36:18.228738673 +0000 UTC m=+232.266999419" watchObservedRunningTime="2026-03-14 05:36:18.230755272 +0000 UTC m=+232.269016018" Mar 14 05:36:18 crc kubenswrapper[4817]: I0314 05:36:18.255301 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9nwvm" podStartSLOduration=3.016845527 podStartE2EDuration="1m9.25528201s" podCreationTimestamp="2026-03-14 05:35:09 +0000 UTC" firstStartedPulling="2026-03-14 05:35:11.398215527 +0000 UTC m=+165.436476273" lastFinishedPulling="2026-03-14 
05:36:17.63665201 +0000 UTC m=+231.674912756" observedRunningTime="2026-03-14 05:36:18.251074797 +0000 UTC m=+232.289335543" watchObservedRunningTime="2026-03-14 05:36:18.25528201 +0000 UTC m=+232.293542766" Mar 14 05:36:18 crc kubenswrapper[4817]: I0314 05:36:18.626168 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:36:18 crc kubenswrapper[4817]: I0314 05:36:18.626534 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:36:18 crc kubenswrapper[4817]: I0314 05:36:18.666914 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:36:19 crc kubenswrapper[4817]: I0314 05:36:19.072867 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:36:19 crc kubenswrapper[4817]: I0314 05:36:19.073672 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:36:19 crc kubenswrapper[4817]: I0314 05:36:19.112141 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:36:19 crc kubenswrapper[4817]: I0314 05:36:19.268649 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:36:19 crc kubenswrapper[4817]: I0314 05:36:19.705306 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:36:19 crc kubenswrapper[4817]: I0314 05:36:19.705617 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:36:20 crc kubenswrapper[4817]: I0314 05:36:20.040651 4817 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:36:20 crc kubenswrapper[4817]: I0314 05:36:20.041628 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:36:20 crc kubenswrapper[4817]: I0314 05:36:20.224722 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6v7h"] Mar 14 05:36:20 crc kubenswrapper[4817]: I0314 05:36:20.225013 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h6v7h" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerName="registry-server" containerID="cri-o://e7e15b68aef6edab58ae33b877b145b7eef8e3bdedd965d49939972de4b06e4b" gracePeriod=2 Mar 14 05:36:20 crc kubenswrapper[4817]: I0314 05:36:20.740542 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9nwvm" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="registry-server" probeResult="failure" output=< Mar 14 05:36:20 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:36:20 crc kubenswrapper[4817]: > Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.084098 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cs45w" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="registry-server" probeResult="failure" output=< Mar 14 05:36:21 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:36:21 crc kubenswrapper[4817]: > Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.235163 4817 generic.go:334] "Generic (PLEG): container finished" podID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerID="e7e15b68aef6edab58ae33b877b145b7eef8e3bdedd965d49939972de4b06e4b" exitCode=0 Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.235242 4817 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7h" event={"ID":"213ab431-bf6b-41ea-8117-d3406d2b654a","Type":"ContainerDied","Data":"e7e15b68aef6edab58ae33b877b145b7eef8e3bdedd965d49939972de4b06e4b"} Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.562386 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6v7h" Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.620810 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdhv4\" (UniqueName: \"kubernetes.io/projected/213ab431-bf6b-41ea-8117-d3406d2b654a-kube-api-access-sdhv4\") pod \"213ab431-bf6b-41ea-8117-d3406d2b654a\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.620882 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-catalog-content\") pod \"213ab431-bf6b-41ea-8117-d3406d2b654a\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.620990 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-utilities\") pod \"213ab431-bf6b-41ea-8117-d3406d2b654a\" (UID: \"213ab431-bf6b-41ea-8117-d3406d2b654a\") " Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.621687 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-utilities" (OuterVolumeSpecName: "utilities") pod "213ab431-bf6b-41ea-8117-d3406d2b654a" (UID: "213ab431-bf6b-41ea-8117-d3406d2b654a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.626072 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213ab431-bf6b-41ea-8117-d3406d2b654a-kube-api-access-sdhv4" (OuterVolumeSpecName: "kube-api-access-sdhv4") pod "213ab431-bf6b-41ea-8117-d3406d2b654a" (UID: "213ab431-bf6b-41ea-8117-d3406d2b654a"). InnerVolumeSpecName "kube-api-access-sdhv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.680958 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "213ab431-bf6b-41ea-8117-d3406d2b654a" (UID: "213ab431-bf6b-41ea-8117-d3406d2b654a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.722302 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdhv4\" (UniqueName: \"kubernetes.io/projected/213ab431-bf6b-41ea-8117-d3406d2b654a-kube-api-access-sdhv4\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.722342 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:21 crc kubenswrapper[4817]: I0314 05:36:21.722356 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213ab431-bf6b-41ea-8117-d3406d2b654a-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:22 crc kubenswrapper[4817]: I0314 05:36:22.242657 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7h" 
event={"ID":"213ab431-bf6b-41ea-8117-d3406d2b654a","Type":"ContainerDied","Data":"35b0f57d003653ed6ecd559f52765aa8d33592760a9cf3ce53e933b36286cc37"} Mar 14 05:36:22 crc kubenswrapper[4817]: I0314 05:36:22.242710 4817 scope.go:117] "RemoveContainer" containerID="e7e15b68aef6edab58ae33b877b145b7eef8e3bdedd965d49939972de4b06e4b" Mar 14 05:36:22 crc kubenswrapper[4817]: I0314 05:36:22.242722 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6v7h" Mar 14 05:36:22 crc kubenswrapper[4817]: I0314 05:36:22.267713 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6v7h"] Mar 14 05:36:22 crc kubenswrapper[4817]: I0314 05:36:22.268293 4817 scope.go:117] "RemoveContainer" containerID="498ceaab4a4b89652fc8991b3335b88f5f1823e7f4ec0ffeaa61b548b962321a" Mar 14 05:36:22 crc kubenswrapper[4817]: I0314 05:36:22.271604 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h6v7h"] Mar 14 05:36:22 crc kubenswrapper[4817]: I0314 05:36:22.302195 4817 scope.go:117] "RemoveContainer" containerID="2e6310ad1d6db96cd44f8266baa01f0fd12796dce3973687bf7198745d26b873" Mar 14 05:36:22 crc kubenswrapper[4817]: I0314 05:36:22.754693 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" path="/var/lib/kubelet/pods/213ab431-bf6b-41ea-8117-d3406d2b654a/volumes" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.034372 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-674c8cfddd-x6rtp"] Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.035099 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" podUID="1466c984-39e9-447b-99d9-9eb03c521496" containerName="controller-manager" 
containerID="cri-o://4e9d6475738a762655f6390cb8520b0935609002d2a64b4f6a11d379595d0e57" gracePeriod=30 Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.123406 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh"] Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.123675 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" podUID="f7285cb0-bab0-440c-9bfa-73030beeea10" containerName="route-controller-manager" containerID="cri-o://7265edbfbae46aa9bd92cb5236dc8caa41cf828391626a33c6a5e9e1ad9a1840" gracePeriod=30 Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.256012 4817 generic.go:334] "Generic (PLEG): container finished" podID="1466c984-39e9-447b-99d9-9eb03c521496" containerID="4e9d6475738a762655f6390cb8520b0935609002d2a64b4f6a11d379595d0e57" exitCode=0 Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.256084 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" event={"ID":"1466c984-39e9-447b-99d9-9eb03c521496","Type":"ContainerDied","Data":"4e9d6475738a762655f6390cb8520b0935609002d2a64b4f6a11d379595d0e57"} Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.263028 4817 generic.go:334] "Generic (PLEG): container finished" podID="f7285cb0-bab0-440c-9bfa-73030beeea10" containerID="7265edbfbae46aa9bd92cb5236dc8caa41cf828391626a33c6a5e9e1ad9a1840" exitCode=0 Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.263116 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" event={"ID":"f7285cb0-bab0-440c-9bfa-73030beeea10","Type":"ContainerDied","Data":"7265edbfbae46aa9bd92cb5236dc8caa41cf828391626a33c6a5e9e1ad9a1840"} Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.655455 4817 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.662140 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779272 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-config\") pod \"f7285cb0-bab0-440c-9bfa-73030beeea10\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779323 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh97x\" (UniqueName: \"kubernetes.io/projected/f7285cb0-bab0-440c-9bfa-73030beeea10-kube-api-access-lh97x\") pod \"f7285cb0-bab0-440c-9bfa-73030beeea10\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779350 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1466c984-39e9-447b-99d9-9eb03c521496-serving-cert\") pod \"1466c984-39e9-447b-99d9-9eb03c521496\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779369 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-config\") pod \"1466c984-39e9-447b-99d9-9eb03c521496\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779408 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-proxy-ca-bundles\") pod \"1466c984-39e9-447b-99d9-9eb03c521496\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779436 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-client-ca\") pod \"1466c984-39e9-447b-99d9-9eb03c521496\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779453 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-client-ca\") pod \"f7285cb0-bab0-440c-9bfa-73030beeea10\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779470 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7285cb0-bab0-440c-9bfa-73030beeea10-serving-cert\") pod \"f7285cb0-bab0-440c-9bfa-73030beeea10\" (UID: \"f7285cb0-bab0-440c-9bfa-73030beeea10\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.779495 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbkw6\" (UniqueName: \"kubernetes.io/projected/1466c984-39e9-447b-99d9-9eb03c521496-kube-api-access-tbkw6\") pod \"1466c984-39e9-447b-99d9-9eb03c521496\" (UID: \"1466c984-39e9-447b-99d9-9eb03c521496\") " Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.780684 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1466c984-39e9-447b-99d9-9eb03c521496" (UID: "1466c984-39e9-447b-99d9-9eb03c521496"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.780708 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-config" (OuterVolumeSpecName: "config") pod "f7285cb0-bab0-440c-9bfa-73030beeea10" (UID: "f7285cb0-bab0-440c-9bfa-73030beeea10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.780814 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-client-ca" (OuterVolumeSpecName: "client-ca") pod "1466c984-39e9-447b-99d9-9eb03c521496" (UID: "1466c984-39e9-447b-99d9-9eb03c521496"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.781032 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-client-ca" (OuterVolumeSpecName: "client-ca") pod "f7285cb0-bab0-440c-9bfa-73030beeea10" (UID: "f7285cb0-bab0-440c-9bfa-73030beeea10"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.781171 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-config" (OuterVolumeSpecName: "config") pod "1466c984-39e9-447b-99d9-9eb03c521496" (UID: "1466c984-39e9-447b-99d9-9eb03c521496"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.784819 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1466c984-39e9-447b-99d9-9eb03c521496-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1466c984-39e9-447b-99d9-9eb03c521496" (UID: "1466c984-39e9-447b-99d9-9eb03c521496"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.784993 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1466c984-39e9-447b-99d9-9eb03c521496-kube-api-access-tbkw6" (OuterVolumeSpecName: "kube-api-access-tbkw6") pod "1466c984-39e9-447b-99d9-9eb03c521496" (UID: "1466c984-39e9-447b-99d9-9eb03c521496"). InnerVolumeSpecName "kube-api-access-tbkw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.785197 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7285cb0-bab0-440c-9bfa-73030beeea10-kube-api-access-lh97x" (OuterVolumeSpecName: "kube-api-access-lh97x") pod "f7285cb0-bab0-440c-9bfa-73030beeea10" (UID: "f7285cb0-bab0-440c-9bfa-73030beeea10"). InnerVolumeSpecName "kube-api-access-lh97x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.787999 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7285cb0-bab0-440c-9bfa-73030beeea10-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f7285cb0-bab0-440c-9bfa-73030beeea10" (UID: "f7285cb0-bab0-440c-9bfa-73030beeea10"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881087 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881135 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881149 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881160 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7285cb0-bab0-440c-9bfa-73030beeea10-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881173 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbkw6\" (UniqueName: \"kubernetes.io/projected/1466c984-39e9-447b-99d9-9eb03c521496-kube-api-access-tbkw6\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881187 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7285cb0-bab0-440c-9bfa-73030beeea10-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881198 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh97x\" (UniqueName: \"kubernetes.io/projected/f7285cb0-bab0-440c-9bfa-73030beeea10-kube-api-access-lh97x\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881210 4817 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1466c984-39e9-447b-99d9-9eb03c521496-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:23 crc kubenswrapper[4817]: I0314 05:36:23.881220 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1466c984-39e9-447b-99d9-9eb03c521496-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259337 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-669b45cd95-xn27j"] Mar 14 05:36:24 crc kubenswrapper[4817]: E0314 05:36:24.259699 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7285cb0-bab0-440c-9bfa-73030beeea10" containerName="route-controller-manager" Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259715 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7285cb0-bab0-440c-9bfa-73030beeea10" containerName="route-controller-manager" Mar 14 05:36:24 crc kubenswrapper[4817]: E0314 05:36:24.259730 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1466c984-39e9-447b-99d9-9eb03c521496" containerName="controller-manager" Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259738 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1466c984-39e9-447b-99d9-9eb03c521496" containerName="controller-manager" Mar 14 05:36:24 crc kubenswrapper[4817]: E0314 05:36:24.259751 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53bc251-d4cf-4e2c-b731-84eed88c78af" containerName="oc" Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259760 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53bc251-d4cf-4e2c-b731-84eed88c78af" containerName="oc" Mar 14 05:36:24 crc kubenswrapper[4817]: E0314 05:36:24.259771 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" 
containerName="extract-utilities"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259779 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerName="extract-utilities"
Mar 14 05:36:24 crc kubenswrapper[4817]: E0314 05:36:24.259794 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerName="extract-content"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259802 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerName="extract-content"
Mar 14 05:36:24 crc kubenswrapper[4817]: E0314 05:36:24.259811 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerName="registry-server"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259818 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerName="registry-server"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259961 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1466c984-39e9-447b-99d9-9eb03c521496" containerName="controller-manager"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259973 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7285cb0-bab0-440c-9bfa-73030beeea10" containerName="route-controller-manager"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259983 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b53bc251-d4cf-4e2c-b731-84eed88c78af" containerName="oc"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.259995 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="213ab431-bf6b-41ea-8117-d3406d2b654a" containerName="registry-server"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.260452 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.262981 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"]
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.263562 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.272867 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-669b45cd95-xn27j"]
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.274912 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh" event={"ID":"f7285cb0-bab0-440c-9bfa-73030beeea10","Type":"ContainerDied","Data":"fed84654abd98d0cc5324852515ce8db24123670a2751de806be30b3d327b802"}
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.274966 4817 scope.go:117] "RemoveContainer" containerID="7265edbfbae46aa9bd92cb5236dc8caa41cf828391626a33c6a5e9e1ad9a1840"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.274931 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.277628 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.277605 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674c8cfddd-x6rtp" event={"ID":"1466c984-39e9-447b-99d9-9eb03c521496","Type":"ContainerDied","Data":"9dac33a86f19bc50f0b8b9880eaf82b6aecacf31b63c5389b19d4a95938981d2"}
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.300886 4817 scope.go:117] "RemoveContainer" containerID="4e9d6475738a762655f6390cb8520b0935609002d2a64b4f6a11d379595d0e57"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.320827 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"]
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.355923 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-674c8cfddd-x6rtp"]
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.359815 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-674c8cfddd-x6rtp"]
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.371222 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh"]
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.376408 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7846d5d576-9s7mh"]
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.390678 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92005e43-d016-41d7-90c0-670bad0588d0-serving-cert\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.390736 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv5fw\" (UniqueName: \"kubernetes.io/projected/3c6b4546-90e1-4ce1-8f15-394413dcfabc-kube-api-access-cv5fw\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.390767 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-client-ca\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.390831 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-config\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.390857 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jttpm\" (UniqueName: \"kubernetes.io/projected/92005e43-d016-41d7-90c0-670bad0588d0-kube-api-access-jttpm\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.390876 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-config\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.390903 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6b4546-90e1-4ce1-8f15-394413dcfabc-serving-cert\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.390973 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-proxy-ca-bundles\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.391044 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-client-ca\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.492218 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92005e43-d016-41d7-90c0-670bad0588d0-serving-cert\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.492295 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv5fw\" (UniqueName: \"kubernetes.io/projected/3c6b4546-90e1-4ce1-8f15-394413dcfabc-kube-api-access-cv5fw\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.492328 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-client-ca\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.492364 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-config\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.492402 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jttpm\" (UniqueName: \"kubernetes.io/projected/92005e43-d016-41d7-90c0-670bad0588d0-kube-api-access-jttpm\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.492427 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-config\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.492445 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6b4546-90e1-4ce1-8f15-394413dcfabc-serving-cert\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.493590 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-client-ca\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.493870 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-config\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.493948 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-proxy-ca-bundles\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.493986 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-client-ca\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.494571 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-proxy-ca-bundles\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.494706 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-client-ca\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.494883 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-config\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.497577 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92005e43-d016-41d7-90c0-670bad0588d0-serving-cert\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.497631 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6b4546-90e1-4ce1-8f15-394413dcfabc-serving-cert\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.508763 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jttpm\" (UniqueName: \"kubernetes.io/projected/92005e43-d016-41d7-90c0-670bad0588d0-kube-api-access-jttpm\") pod \"route-controller-manager-6d64f9b597-mvjmg\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.514736 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv5fw\" (UniqueName: \"kubernetes.io/projected/3c6b4546-90e1-4ce1-8f15-394413dcfabc-kube-api-access-cv5fw\") pod \"controller-manager-669b45cd95-xn27j\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.584047 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.621615 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.743129 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1466c984-39e9-447b-99d9-9eb03c521496" path="/var/lib/kubelet/pods/1466c984-39e9-447b-99d9-9eb03c521496/volumes"
Mar 14 05:36:24 crc kubenswrapper[4817]: I0314 05:36:24.747015 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7285cb0-bab0-440c-9bfa-73030beeea10" path="/var/lib/kubelet/pods/f7285cb0-bab0-440c-9bfa-73030beeea10/volumes"
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.027117 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-669b45cd95-xn27j"]
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.096315 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"]
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.286296 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" event={"ID":"3c6b4546-90e1-4ce1-8f15-394413dcfabc","Type":"ContainerStarted","Data":"60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3"}
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.286360 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" event={"ID":"3c6b4546-90e1-4ce1-8f15-394413dcfabc","Type":"ContainerStarted","Data":"ea782bc5044453e2ffa0340c7d7b08163616e133b4741f1f0ed9afdb8bd18de8"}
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.286607 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.289539 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" event={"ID":"92005e43-d016-41d7-90c0-670bad0588d0","Type":"ContainerStarted","Data":"f91e6970c5e9d258394b98977fcc968c3ea9d45ba3c110627e12f55def2a53a0"}
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.289592 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" event={"ID":"92005e43-d016-41d7-90c0-670bad0588d0","Type":"ContainerStarted","Data":"bd8cccaefc099309f08e2716047ab2aba51df1f8cff8231ec5255436f72f0e4e"}
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.289802 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.290439 4817 patch_prober.go:28] interesting pod/controller-manager-669b45cd95-xn27j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body=
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.290483 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" podUID="3c6b4546-90e1-4ce1-8f15-394413dcfabc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused"
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.292073 4817 patch_prober.go:28] interesting pod/route-controller-manager-6d64f9b597-mvjmg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body=
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.292262 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" podUID="92005e43-d016-41d7-90c0-670bad0588d0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused"
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.310440 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" podStartSLOduration=2.31039812 podStartE2EDuration="2.31039812s" podCreationTimestamp="2026-03-14 05:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:36:25.303319102 +0000 UTC m=+239.341579848" watchObservedRunningTime="2026-03-14 05:36:25.31039812 +0000 UTC m=+239.348658866"
Mar 14 05:36:25 crc kubenswrapper[4817]: I0314 05:36:25.331720 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" podStartSLOduration=2.331694963 podStartE2EDuration="2.331694963s" podCreationTimestamp="2026-03-14 05:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:36:25.328924282 +0000 UTC m=+239.367185038" watchObservedRunningTime="2026-03-14 05:36:25.331694963 +0000 UTC m=+239.369955709"
Mar 14 05:36:26 crc kubenswrapper[4817]: I0314 05:36:26.302269 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j"
Mar 14 05:36:26 crc kubenswrapper[4817]: I0314 05:36:26.304603 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"
Mar 14 05:36:26 crc kubenswrapper[4817]: I0314 05:36:26.462297 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mxvkz"
Mar 14 05:36:26 crc kubenswrapper[4817]: I0314 05:36:26.462720 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mxvkz"
Mar 14 05:36:26 crc kubenswrapper[4817]: I0314 05:36:26.509090 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mxvkz"
Mar 14 05:36:26 crc kubenswrapper[4817]: I0314 05:36:26.895440 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9p69t"
Mar 14 05:36:26 crc kubenswrapper[4817]: I0314 05:36:26.945246 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9p69t"
Mar 14 05:36:27 crc kubenswrapper[4817]: I0314 05:36:27.351274 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mxvkz"
Mar 14 05:36:27 crc kubenswrapper[4817]: I0314 05:36:27.993944 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5mbx5"]
Mar 14 05:36:29 crc kubenswrapper[4817]: I0314 05:36:29.024660 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9p69t"]
Mar 14 05:36:29 crc kubenswrapper[4817]: I0314 05:36:29.025053 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9p69t" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="registry-server" containerID="cri-o://96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820" gracePeriod=2
Mar 14 05:36:29 crc kubenswrapper[4817]: I0314 05:36:29.117762 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-24f8l"
Mar 14 05:36:29 crc kubenswrapper[4817]: I0314 05:36:29.743591 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9nwvm"
Mar 14 05:36:29 crc kubenswrapper[4817]: I0314 05:36:29.793210 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9nwvm"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.083186 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cs45w"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.096754 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9p69t"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.136403 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cs45w"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.283797 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-catalog-content\") pod \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") "
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.283999 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vc68\" (UniqueName: \"kubernetes.io/projected/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-kube-api-access-2vc68\") pod \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") "
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.284051 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-utilities\") pod \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\" (UID: \"2c5f53ee-2afc-4fe8-a17c-10c9808edac2\") "
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.285113 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-utilities" (OuterVolumeSpecName: "utilities") pod "2c5f53ee-2afc-4fe8-a17c-10c9808edac2" (UID: "2c5f53ee-2afc-4fe8-a17c-10c9808edac2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.297097 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-kube-api-access-2vc68" (OuterVolumeSpecName: "kube-api-access-2vc68") pod "2c5f53ee-2afc-4fe8-a17c-10c9808edac2" (UID: "2c5f53ee-2afc-4fe8-a17c-10c9808edac2"). InnerVolumeSpecName "kube-api-access-2vc68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.332182 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c5f53ee-2afc-4fe8-a17c-10c9808edac2" (UID: "2c5f53ee-2afc-4fe8-a17c-10c9808edac2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.349286 4817 generic.go:334] "Generic (PLEG): container finished" podID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerID="96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820" exitCode=0
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.349368 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9p69t"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.349378 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p69t" event={"ID":"2c5f53ee-2afc-4fe8-a17c-10c9808edac2","Type":"ContainerDied","Data":"96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820"}
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.349458 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p69t" event={"ID":"2c5f53ee-2afc-4fe8-a17c-10c9808edac2","Type":"ContainerDied","Data":"2d4e9184854d8f85636475d13a39cf5c0a8a5b92085cf5bfe9c48f0dfd33f305"}
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.349488 4817 scope.go:117] "RemoveContainer" containerID="96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.373842 4817 scope.go:117] "RemoveContainer" containerID="44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.387344 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.387377 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vc68\" (UniqueName: \"kubernetes.io/projected/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-kube-api-access-2vc68\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.387386 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5f53ee-2afc-4fe8-a17c-10c9808edac2-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.392088 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9p69t"]
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.395658 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9p69t"]
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.403653 4817 scope.go:117] "RemoveContainer" containerID="1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.427245 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24f8l"]
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.427592 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-24f8l" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerName="registry-server" containerID="cri-o://d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481" gracePeriod=2
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.437905 4817 scope.go:117] "RemoveContainer" containerID="96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820"
Mar 14 05:36:30 crc kubenswrapper[4817]: E0314 05:36:30.438465 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820\": container with ID starting with 96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820 not found: ID does not exist" containerID="96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.438497 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820"} err="failed to get container status \"96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820\": rpc error: code = NotFound desc = could not find container \"96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820\": container with ID starting with 96f8037a2fe92f231c06e11b69f9f086c24bd1afeff2c304c7a88a2798f6a820 not found: ID does not exist"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.438528 4817 scope.go:117] "RemoveContainer" containerID="44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b"
Mar 14 05:36:30 crc kubenswrapper[4817]: E0314 05:36:30.438999 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b\": container with ID starting with 44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b not found: ID does not exist" containerID="44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.439119 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b"} err="failed to get container status \"44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b\": rpc error: code = NotFound desc = could not find container \"44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b\": container with ID starting with 44f1ad9b45c03e0c83ea725f97916314c130cdc5995903c2e368c8b3332a564b not found: ID does not exist"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.439220 4817 scope.go:117] "RemoveContainer" containerID="1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a"
Mar 14 05:36:30 crc kubenswrapper[4817]: E0314 05:36:30.439664 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a\": container with ID starting with 1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a not found: ID does not exist" containerID="1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.439747 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a"} err="failed to get container status \"1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a\": rpc error: code = NotFound desc = could not find container \"1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a\": container with ID starting with 1c969de5157600d73e6c593055835ce6b33d00112642060d7d0113c7ad93bf6a not found: ID does not exist"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.737220 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" path="/var/lib/kubelet/pods/2c5f53ee-2afc-4fe8-a17c-10c9808edac2/volumes"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.857950 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24f8l"
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.994690 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-utilities\") pod \"655a63e0-d806-4b09-a33f-aef9c8c58b54\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") "
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.994764 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwfdp\" (UniqueName: \"kubernetes.io/projected/655a63e0-d806-4b09-a33f-aef9c8c58b54-kube-api-access-mwfdp\") pod \"655a63e0-d806-4b09-a33f-aef9c8c58b54\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") "
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.994786 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-catalog-content\") pod \"655a63e0-d806-4b09-a33f-aef9c8c58b54\" (UID: \"655a63e0-d806-4b09-a33f-aef9c8c58b54\") "
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.995584 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-utilities" (OuterVolumeSpecName: "utilities") pod "655a63e0-d806-4b09-a33f-aef9c8c58b54" (UID: "655a63e0-d806-4b09-a33f-aef9c8c58b54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.997311 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:30 crc kubenswrapper[4817]: I0314 05:36:30.998524 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655a63e0-d806-4b09-a33f-aef9c8c58b54-kube-api-access-mwfdp" (OuterVolumeSpecName: "kube-api-access-mwfdp") pod "655a63e0-d806-4b09-a33f-aef9c8c58b54" (UID: "655a63e0-d806-4b09-a33f-aef9c8c58b54"). InnerVolumeSpecName "kube-api-access-mwfdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.023302 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "655a63e0-d806-4b09-a33f-aef9c8c58b54" (UID: "655a63e0-d806-4b09-a33f-aef9c8c58b54"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.098407 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwfdp\" (UniqueName: \"kubernetes.io/projected/655a63e0-d806-4b09-a33f-aef9c8c58b54-kube-api-access-mwfdp\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.098445 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655a63e0-d806-4b09-a33f-aef9c8c58b54-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.359147 4817 generic.go:334] "Generic (PLEG): container finished" podID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerID="d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481" exitCode=0 Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.359247 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24f8l" event={"ID":"655a63e0-d806-4b09-a33f-aef9c8c58b54","Type":"ContainerDied","Data":"d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481"} Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.359293 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24f8l" event={"ID":"655a63e0-d806-4b09-a33f-aef9c8c58b54","Type":"ContainerDied","Data":"6fd8b765096597d6b4f83a5751db41b82cd0f4ee5b532a8f573f96eced12e110"} Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.359312 4817 scope.go:117] "RemoveContainer" containerID="d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.359307 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24f8l" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.378536 4817 scope.go:117] "RemoveContainer" containerID="97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.404805 4817 scope.go:117] "RemoveContainer" containerID="a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.417386 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24f8l"] Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.421419 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-24f8l"] Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.427806 4817 scope.go:117] "RemoveContainer" containerID="d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.428252 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481\": container with ID starting with d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481 not found: ID does not exist" containerID="d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.428295 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481"} err="failed to get container status \"d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481\": rpc error: code = NotFound desc = could not find container \"d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481\": container with ID starting with d39a9a6991e029f125ad2737d7de03b5740447adcab6d377618bd70132c9a481 not found: 
ID does not exist" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.428319 4817 scope.go:117] "RemoveContainer" containerID="97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.428558 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e\": container with ID starting with 97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e not found: ID does not exist" containerID="97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.428619 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e"} err="failed to get container status \"97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e\": rpc error: code = NotFound desc = could not find container \"97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e\": container with ID starting with 97ef26ddcb8d2e56d1836926ffae3ce710212377f8bf7c6cd2f2f454c7af516e not found: ID does not exist" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.428669 4817 scope.go:117] "RemoveContainer" containerID="a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.429030 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab\": container with ID starting with a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab not found: ID does not exist" containerID="a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.429063 4817 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab"} err="failed to get container status \"a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab\": rpc error: code = NotFound desc = could not find container \"a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab\": container with ID starting with a29f9c10cfa3b4d5c3f572eb86ee7ad983325dbe0e41df46c1b6c9dc98400eab not found: ID does not exist" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.747502 4817 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.747767 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="registry-server" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.747787 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="registry-server" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.747803 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="extract-content" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.747812 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="extract-content" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.747864 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerName="registry-server" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.747873 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerName="registry-server" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.747885 4817 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerName="extract-utilities" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.747911 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerName="extract-utilities" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.747922 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="extract-utilities" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.747930 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="extract-utilities" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.747949 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerName="extract-content" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.747958 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerName="extract-content" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.748098 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5f53ee-2afc-4fe8-a17c-10c9808edac2" containerName="registry-server" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.748112 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" containerName="registry-server" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.748478 4817 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.748754 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285" 
gracePeriod=15 Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.748929 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.749300 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055" gracePeriod=15 Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.749380 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7" gracePeriod=15 Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.749497 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed" gracePeriod=15 Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.749520 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61" gracePeriod=15 Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750056 4817 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750196 4817 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750206 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750217 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750226 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750238 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750247 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750256 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750264 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750274 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750281 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 
05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750294 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750302 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750310 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750318 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750328 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750335 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750345 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750352 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750475 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750487 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750497 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750505 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750519 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750528 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750539 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750551 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: E0314 05:36:31.750667 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750676 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.750781 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 05:36:31 crc kubenswrapper[4817]: 
E0314 05:36:31.835485 4817 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.29:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.914840 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.915256 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.915351 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.915399 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 
05:36:31.915460 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.915495 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.915527 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:31 crc kubenswrapper[4817]: I0314 05:36:31.915627 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.016803 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc 
kubenswrapper[4817]: I0314 05:36:32.016937 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.016954 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.016995 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017467 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017496 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc 
kubenswrapper[4817]: I0314 05:36:32.017628 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017688 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017747 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017791 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017826 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017855 4817 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017829 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017870 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017876 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.017915 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.136203 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:36:32 crc kubenswrapper[4817]: W0314 05:36:32.156231 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-8a520f8fe88db04d2cffa8490022529222bc2a0c061b26129c060dea367d1d9c WatchSource:0}: Error finding container 8a520f8fe88db04d2cffa8490022529222bc2a0c061b26129c060dea367d1d9c: Status 404 returned error can't find the container with id 8a520f8fe88db04d2cffa8490022529222bc2a0c061b26129c060dea367d1d9c Mar 14 05:36:32 crc kubenswrapper[4817]: E0314 05:36:32.159210 4817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.29:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c9e7826a45091 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:36:32.158699665 +0000 UTC m=+246.196960411,LastTimestamp:2026-03-14 05:36:32.158699665 +0000 UTC m=+246.196960411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.374081 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.375378 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.376119 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055" exitCode=0
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.376146 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61" exitCode=0
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.376154 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed" exitCode=0
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.376166 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7" exitCode=2
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.376222 4817 scope.go:117] "RemoveContainer" containerID="c25547d32abb89d0625807ad85c62d49c0f3c9e03ff8f592109391e4eadc3d06"
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.380044 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8a520f8fe88db04d2cffa8490022529222bc2a0c061b26129c060dea367d1d9c"}
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.381493 4817 generic.go:334] "Generic (PLEG): 
container finished" podID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" containerID="7f2843807302d3d8313a9a1244abe452999cec00c12125ccc8b8f592bda5088f" exitCode=0
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.381546 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb01e662-5d03-4606-bc4e-709eb9e76cd4","Type":"ContainerDied","Data":"7f2843807302d3d8313a9a1244abe452999cec00c12125ccc8b8f592bda5088f"}
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.382884 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.384044 4817 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:32 crc kubenswrapper[4817]: I0314 05:36:32.742947 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="655a63e0-d806-4b09-a33f-aef9c8c58b54" path="/var/lib/kubelet/pods/655a63e0-d806-4b09-a33f-aef9c8c58b54/volumes"
Mar 14 05:36:32 crc kubenswrapper[4817]: E0314 05:36:32.829047 4817 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.29:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" 
volumeName="registry-storage"
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.389625 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038"}
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.390421 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:33 crc kubenswrapper[4817]: E0314 05:36:33.390840 4817 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.29:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.391928 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.770935 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.771446 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.848877 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-var-lock\") pod \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") "
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.849087 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kubelet-dir\") pod \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") "
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.849197 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kube-api-access\") pod \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\" (UID: \"fb01e662-5d03-4606-bc4e-709eb9e76cd4\") "
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.851169 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb01e662-5d03-4606-bc4e-709eb9e76cd4" (UID: "fb01e662-5d03-4606-bc4e-709eb9e76cd4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.851169 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-var-lock" (OuterVolumeSpecName: "var-lock") pod "fb01e662-5d03-4606-bc4e-709eb9e76cd4" (UID: "fb01e662-5d03-4606-bc4e-709eb9e76cd4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.857571 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb01e662-5d03-4606-bc4e-709eb9e76cd4" (UID: "fb01e662-5d03-4606-bc4e-709eb9e76cd4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.950576 4817 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.950620 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb01e662-5d03-4606-bc4e-709eb9e76cd4-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:33 crc kubenswrapper[4817]: I0314 05:36:33.950635 4817 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fb01e662-5d03-4606-bc4e-709eb9e76cd4-var-lock\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.113491 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 
05:36:34.114617 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.115706 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.115952 4817 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.152531 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.152675 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.152956 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.152981 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.153037 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.153066 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.153140 4817 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.153151 4817 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.153160 4817 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.398886 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.399580 4817 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285" exitCode=0
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.399641 4817 scope.go:117] "RemoveContainer" containerID="b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.399723 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.401415 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fb01e662-5d03-4606-bc4e-709eb9e76cd4","Type":"ContainerDied","Data":"cb16c466b0c00cb02c693c4f101822f81c27ed5c79151e10ffd7110377c21d6a"}
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.401453 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb16c466b0c00cb02c693c4f101822f81c27ed5c79151e10ffd7110377c21d6a"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.401464 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Mar 14 05:36:34 crc kubenswrapper[4817]: E0314 05:36:34.402041 4817 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.29:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.414436 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.414811 4817 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.418083 4817 scope.go:117] 
"RemoveContainer" containerID="00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.427706 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.428300 4817 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.431183 4817 scope.go:117] "RemoveContainer" containerID="13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.449952 4817 scope.go:117] "RemoveContainer" containerID="9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.469344 4817 scope.go:117] "RemoveContainer" containerID="f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.486397 4817 scope.go:117] "RemoveContainer" containerID="ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.502741 4817 scope.go:117] "RemoveContainer" containerID="b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055"
Mar 14 05:36:34 crc kubenswrapper[4817]: E0314 05:36:34.503047 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\": container with ID starting with b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055 not found: ID does not exist" containerID="b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.503086 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055"} err="failed to get container status \"b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\": rpc error: code = NotFound desc = could not find container \"b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055\": container with ID starting with b0fd1d33d37a87c163d45b9e07e322b409307ad0105b036758e18d0e6600c055 not found: ID does not exist"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.503108 4817 scope.go:117] "RemoveContainer" containerID="00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61"
Mar 14 05:36:34 crc kubenswrapper[4817]: E0314 05:36:34.503332 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\": container with ID starting with 00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61 not found: ID does not exist" containerID="00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.503358 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61"} err="failed to get container status \"00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\": rpc error: code = NotFound desc = could not find container \"00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61\": container with ID 
starting with 00b84b6b80c2286ac7f5df8ab194356e0bbb43ba19dcbe71499ea95cc503eb61 not found: ID does not exist"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.503376 4817 scope.go:117] "RemoveContainer" containerID="13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed"
Mar 14 05:36:34 crc kubenswrapper[4817]: E0314 05:36:34.503658 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\": container with ID starting with 13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed not found: ID does not exist" containerID="13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.503689 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed"} err="failed to get container status \"13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\": rpc error: code = NotFound desc = could not find container \"13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed\": container with ID starting with 13a4031502bf5a42c60a191c6cd43b5324a2a955ae598e75a219aeeac1d776ed not found: ID does not exist"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.503704 4817 scope.go:117] "RemoveContainer" containerID="9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7"
Mar 14 05:36:34 crc kubenswrapper[4817]: E0314 05:36:34.503997 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\": container with ID starting with 9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7 not found: ID does not exist" containerID="9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7"
Mar 14 
05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.504040 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7"} err="failed to get container status \"9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\": rpc error: code = NotFound desc = could not find container \"9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7\": container with ID starting with 9fecfc8a50fc845f3323c53d880e1a14ba42be3c7835602f2ab97c681ccfc0e7 not found: ID does not exist"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.504075 4817 scope.go:117] "RemoveContainer" containerID="f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285"
Mar 14 05:36:34 crc kubenswrapper[4817]: E0314 05:36:34.504502 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\": container with ID starting with f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285 not found: ID does not exist" containerID="f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.504537 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285"} err="failed to get container status \"f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\": rpc error: code = NotFound desc = could not find container \"f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285\": container with ID starting with f5f9bb118482770375b9b7610f299f1ad3068b55f101c69d393a7740f68c8285 not found: ID does not exist"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.504557 4817 scope.go:117] "RemoveContainer" 
containerID="ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce"
Mar 14 05:36:34 crc kubenswrapper[4817]: E0314 05:36:34.504817 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\": container with ID starting with ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce not found: ID does not exist" containerID="ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.504842 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce"} err="failed to get container status \"ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\": rpc error: code = NotFound desc = could not find container \"ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce\": container with ID starting with ac939136fdcba9332a1d671c874f5de6f09be500c2f3c5a6b7bd6f23995a88ce not found: ID does not exist"
Mar 14 05:36:34 crc kubenswrapper[4817]: I0314 05:36:34.750295 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Mar 14 05:36:36 crc kubenswrapper[4817]: I0314 05:36:36.735310 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:37 crc kubenswrapper[4817]: E0314 05:36:37.142122 4817 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:37 crc kubenswrapper[4817]: E0314 05:36:37.144430 4817 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:37 crc kubenswrapper[4817]: E0314 05:36:37.145035 4817 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:37 crc kubenswrapper[4817]: E0314 05:36:37.145370 4817 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:37 crc kubenswrapper[4817]: E0314 05:36:37.145700 4817 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:37 crc kubenswrapper[4817]: I0314 05:36:37.145730 4817 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 14 05:36:37 crc kubenswrapper[4817]: E0314 05:36:37.146059 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="200ms"
Mar 14 05:36:37 crc kubenswrapper[4817]: E0314 05:36:37.347098 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="400ms"
Mar 14 05:36:37 crc kubenswrapper[4817]: E0314 05:36:37.748102 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="800ms"
Mar 14 05:36:38 crc kubenswrapper[4817]: I0314 05:36:38.604688 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 05:36:38 crc kubenswrapper[4817]: I0314 05:36:38.604776 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 05:36:38 crc kubenswrapper[4817]: E0314 05:36:38.605208 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="1.6s"
Mar 14 05:36:38 crc kubenswrapper[4817]: E0314 05:36:38.614279 4817 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.29:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189c9e7826a45091 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 05:36:32.158699665 +0000 UTC m=+246.196960411,LastTimestamp:2026-03-14 05:36:32.158699665 +0000 UTC m=+246.196960411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 05:36:40 crc kubenswrapper[4817]: E0314 05:36:40.206072 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="3.2s"
Mar 14 05:36:42 crc kubenswrapper[4817]: I0314 05:36:42.731001 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:36:42 crc kubenswrapper[4817]: I0314 05:36:42.732859 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused"
Mar 14 05:36:42 crc kubenswrapper[4817]: I0314 05:36:42.744423 4817 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd"
Mar 14 05:36:42 crc kubenswrapper[4817]: I0314 05:36:42.744458 4817 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd"
Mar 14 05:36:42 crc kubenswrapper[4817]: E0314 05:36:42.745043 4817 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.29:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 05:36:42 crc kubenswrapper[4817]: I0314 05:36:42.745785 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:42 crc kubenswrapper[4817]: W0314 05:36:42.767780 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-26952962077c878e02340b01f0ddd372e1947969d975126b4e422e6a1bb92cc5 WatchSource:0}: Error finding container 26952962077c878e02340b01f0ddd372e1947969d975126b4e422e6a1bb92cc5: Status 404 returned error can't find the container with id 26952962077c878e02340b01f0ddd372e1947969d975126b4e422e6a1bb92cc5 Mar 14 05:36:43 crc kubenswrapper[4817]: E0314 05:36:43.407970 4817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.29:6443: connect: connection refused" interval="6.4s" Mar 14 05:36:43 crc kubenswrapper[4817]: I0314 05:36:43.636521 4817 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="aef62949c2a6f1d561040f465f161d76786485c29f5b9e85e2262f04bc190e2e" exitCode=0 Mar 14 05:36:43 crc kubenswrapper[4817]: I0314 05:36:43.636564 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"aef62949c2a6f1d561040f465f161d76786485c29f5b9e85e2262f04bc190e2e"} Mar 14 05:36:43 crc kubenswrapper[4817]: I0314 05:36:43.636594 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26952962077c878e02340b01f0ddd372e1947969d975126b4e422e6a1bb92cc5"} Mar 14 05:36:43 crc kubenswrapper[4817]: I0314 05:36:43.636853 4817 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:36:43 crc kubenswrapper[4817]: I0314 05:36:43.636871 4817 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:36:43 crc kubenswrapper[4817]: E0314 05:36:43.637230 4817 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.29:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:43 crc kubenswrapper[4817]: I0314 05:36:43.637241 4817 status_manager.go:851] "Failed to get status for pod" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.29:6443: connect: connection refused" Mar 14 05:36:44 crc kubenswrapper[4817]: I0314 05:36:44.646719 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"42b4436a152f2cef65867d0d01462b9dda15465416d8a89d4765c1cc5a0f8fda"} Mar 14 05:36:44 crc kubenswrapper[4817]: I0314 05:36:44.647009 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4e6a62c4e7e3fbdcfeb75b59eb5f9987dcc18dfbf0f7b854a31793205ccd705c"} Mar 14 05:36:44 crc kubenswrapper[4817]: I0314 05:36:44.647022 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bdc3a950b50143d139484ceed05aae6a8b074569e69adff78cd75b64e7faaf9b"} Mar 14 05:36:44 crc kubenswrapper[4817]: I0314 05:36:44.647032 4817 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e1943e75a3db14f32484c05cc87e69654a23155a118bbbddbccdcd01d1a8ad5e"} Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.662349 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.663758 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.663819 4817 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="16022c27fbbc60a8c0c33d90ddfded55d22accd6833b1494bf1c538e303d3cff" exitCode=1 Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.663914 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"16022c27fbbc60a8c0c33d90ddfded55d22accd6833b1494bf1c538e303d3cff"} Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.664432 4817 scope.go:117] "RemoveContainer" containerID="16022c27fbbc60a8c0c33d90ddfded55d22accd6833b1494bf1c538e303d3cff" Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.668172 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"04b35b130a3147c7612a4c37cef9422c82f5864bee1fa2b229cf681ac8919f56"} Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.668340 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.668412 4817 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:36:45 crc kubenswrapper[4817]: I0314 05:36:45.668434 4817 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:36:46 crc kubenswrapper[4817]: I0314 05:36:46.295520 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:36:46 crc kubenswrapper[4817]: I0314 05:36:46.677122 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 05:36:46 crc kubenswrapper[4817]: I0314 05:36:46.679119 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 05:36:46 crc kubenswrapper[4817]: I0314 05:36:46.679189 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6ed00e7dbcfd766d49ba8d41b9feef1c11354dd0048637089172854269a5487"} Mar 14 05:36:47 crc kubenswrapper[4817]: I0314 05:36:47.746177 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:47 crc kubenswrapper[4817]: I0314 05:36:47.746256 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:47 crc kubenswrapper[4817]: I0314 05:36:47.756059 4817 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:50 crc kubenswrapper[4817]: I0314 05:36:50.677348 4817 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:50 crc kubenswrapper[4817]: I0314 05:36:50.756504 4817 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4b559904-6316-409b-8968-0305be939413" Mar 14 05:36:51 crc kubenswrapper[4817]: I0314 05:36:51.708443 4817 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:36:51 crc kubenswrapper[4817]: I0314 05:36:51.708737 4817 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:36:51 crc kubenswrapper[4817]: I0314 05:36:51.711958 4817 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4b559904-6316-409b-8968-0305be939413" Mar 14 05:36:51 crc kubenswrapper[4817]: I0314 05:36:51.712098 4817 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://e1943e75a3db14f32484c05cc87e69654a23155a118bbbddbccdcd01d1a8ad5e" Mar 14 05:36:51 crc kubenswrapper[4817]: I0314 05:36:51.712116 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:36:52 crc kubenswrapper[4817]: I0314 05:36:52.716018 4817 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:36:52 crc kubenswrapper[4817]: I0314 05:36:52.716065 
4817 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:36:52 crc kubenswrapper[4817]: I0314 05:36:52.720452 4817 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4b559904-6316-409b-8968-0305be939413" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.018925 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" podUID="9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" containerName="oauth-openshift" containerID="cri-o://660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932" gracePeriod=15 Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.287125 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.553027 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.612116 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-idp-0-file-data\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.612308 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xd68\" (UniqueName: \"kubernetes.io/projected/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-kube-api-access-5xd68\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.612378 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-policies\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.612414 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-serving-cert\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.612443 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-router-certs\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc 
kubenswrapper[4817]: I0314 05:36:53.613180 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.613660 4817 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.619146 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-kube-api-access-5xd68" (OuterVolumeSpecName: "kube-api-access-5xd68") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "kube-api-access-5xd68". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.619294 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.619597 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.621355 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.713992 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-ocp-branding-template\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714034 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-provider-selection\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714093 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-trusted-ca-bundle\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714110 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-service-ca\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714140 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-error\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714160 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-session\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714176 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-login\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714204 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-dir\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714227 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-cliconfig\") pod \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\" (UID: \"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4\") " Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714362 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714373 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714382 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xd68\" (UniqueName: \"kubernetes.io/projected/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-kube-api-access-5xd68\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714392 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714585 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: 
"v4-0-config-system-trusted-ca-bundle") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.714641 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.715201 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.715221 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.716933 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.718550 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.719084 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.720450 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.720452 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" (UID: "9ef15d93-0fd9-4fc8-8f61-a29eaca479a4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.723142 4817 generic.go:334] "Generic (PLEG): container finished" podID="9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" containerID="660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932" exitCode=0 Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.723178 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" event={"ID":"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4","Type":"ContainerDied","Data":"660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932"} Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.723208 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" event={"ID":"9ef15d93-0fd9-4fc8-8f61-a29eaca479a4","Type":"ContainerDied","Data":"f11fe5e1508b71d49e43f647d233538d1c7129eb36c18c0737b9cc08f2d7dd57"} Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.723210 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5mbx5" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.723242 4817 scope.go:117] "RemoveContainer" containerID="660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.747088 4817 scope.go:117] "RemoveContainer" containerID="660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932" Mar 14 05:36:53 crc kubenswrapper[4817]: E0314 05:36:53.747870 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932\": container with ID starting with 660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932 not found: ID does not exist" containerID="660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.747922 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932"} err="failed to get container status \"660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932\": rpc error: code = NotFound desc = could not find container \"660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932\": container with ID starting with 660b7b9b7d4a81d1dfcdd6211b648bfc24b70be8a0320ab30fc37a5bf1d51932 not found: ID does not exist" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.815539 4817 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.815567 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.815578 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.815591 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.815602 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.815611 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.815620 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc kubenswrapper[4817]: I0314 05:36:53.815629 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:53 crc 
kubenswrapper[4817]: I0314 05:36:53.815638 4817 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 14 05:36:56 crc kubenswrapper[4817]: I0314 05:36:56.295332 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:36:56 crc kubenswrapper[4817]: I0314 05:36:56.303265 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:36:56 crc kubenswrapper[4817]: I0314 05:36:56.756445 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 05:36:58 crc kubenswrapper[4817]: I0314 05:36:58.311526 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 14 05:36:58 crc kubenswrapper[4817]: I0314 05:36:58.435585 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 14 05:36:59 crc kubenswrapper[4817]: I0314 05:36:59.120538 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 05:36:59 crc kubenswrapper[4817]: I0314 05:36:59.798452 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 14 05:36:59 crc kubenswrapper[4817]: I0314 05:36:59.921808 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 14 05:37:00 crc kubenswrapper[4817]: I0314 05:37:00.057885 4817 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Mar 14 05:37:00 crc kubenswrapper[4817]: I0314 05:37:00.763570 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 14 05:37:00 crc kubenswrapper[4817]: I0314 05:37:00.835979 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 14 05:37:01 crc kubenswrapper[4817]: I0314 05:37:01.245922 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 14 05:37:01 crc kubenswrapper[4817]: I0314 05:37:01.462928 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 14 05:37:01 crc kubenswrapper[4817]: I0314 05:37:01.465371 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 14 05:37:01 crc kubenswrapper[4817]: I0314 05:37:01.478942 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 14 05:37:01 crc kubenswrapper[4817]: I0314 05:37:01.687100 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 14 05:37:01 crc kubenswrapper[4817]: I0314 05:37:01.755948 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 14 05:37:02 crc kubenswrapper[4817]: I0314 05:37:02.609460 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 14 05:37:02 crc kubenswrapper[4817]: I0314 05:37:02.833042 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" 
Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.204257 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.305394 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.330681 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.387695 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.405567 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.426971 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.586329 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.640041 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 05:37:03 crc kubenswrapper[4817]: I0314 05:37:03.760736 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 05:37:04 crc kubenswrapper[4817]: I0314 05:37:04.066165 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 14 05:37:04 crc kubenswrapper[4817]: I0314 05:37:04.312874 4817 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 14 05:37:04 crc kubenswrapper[4817]: I0314 05:37:04.434299 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 05:37:04 crc kubenswrapper[4817]: I0314 05:37:04.458562 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 14 05:37:04 crc kubenswrapper[4817]: I0314 05:37:04.721171 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 14 05:37:04 crc kubenswrapper[4817]: I0314 05:37:04.747887 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 05:37:04 crc kubenswrapper[4817]: I0314 05:37:04.866863 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.051109 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.076186 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.167522 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.173265 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.265787 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 
05:37:05.312144 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.366305 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.380720 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.400608 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.415065 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.625014 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.665195 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.699637 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 14 05:37:05 crc kubenswrapper[4817]: I0314 05:37:05.924276 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.135919 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.211946 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 14 
05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.235529 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.332686 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.470793 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.512349 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.585735 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.601394 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.624845 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.659314 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.671183 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.874757 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 14 05:37:06 crc kubenswrapper[4817]: I0314 05:37:06.921367 4817 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.009389 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.089394 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.092873 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.223264 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.309560 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.340295 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.411037 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.414760 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.431362 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.575525 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.602150 4817 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.631049 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.690275 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.709863 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.726769 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.759177 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.891483 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.953128 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 14 05:37:07 crc kubenswrapper[4817]: I0314 05:37:07.954627 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.028385 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.131793 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.142971 
4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.169712 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.254429 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.471551 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.565325 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.565378 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.618393 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.671375 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.673778 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 14 
05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.722999 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.794331 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.839039 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.850389 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.906687 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.963959 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 14 05:37:08 crc kubenswrapper[4817]: I0314 05:37:08.987051 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.139717 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.145022 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.145177 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.152323 4817 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.157206 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-5mbx5"] Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.157264 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6bcf78946b-ffvvb","openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 05:37:09 crc kubenswrapper[4817]: E0314 05:37:09.157446 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" containerName="oauth-openshift" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.157456 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" containerName="oauth-openshift" Mar 14 05:37:09 crc kubenswrapper[4817]: E0314 05:37:09.157470 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" containerName="installer" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.157476 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" containerName="installer" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.157573 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb01e662-5d03-4606-bc4e-709eb9e76cd4" containerName="installer" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.157591 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" containerName="oauth-openshift" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.157975 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.157966 4817 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.158319 4817 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0bcd18e9-5453-44e2-a7a3-1a8c0219e7cd" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.160518 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.161105 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.161323 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.161961 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.162060 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.162198 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.162311 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.163646 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 14 05:37:09 
crc kubenswrapper[4817]: I0314 05:37:09.163679 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.163647 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.164010 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.164126 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.164235 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.170913 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.174305 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.177499 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.183834 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.183818032 podStartE2EDuration="19.183818032s" podCreationTimestamp="2026-03-14 05:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 
05:37:09.175082956 +0000 UTC m=+283.213343702" watchObservedRunningTime="2026-03-14 05:37:09.183818032 +0000 UTC m=+283.222078778" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.226737 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318010 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318053 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-session\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318074 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318093 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-audit-policies\") pod 
\"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318149 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-error\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318188 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318205 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318223 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c413bccd-da15-4da9-9052-2d120a634837-audit-dir\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc 
kubenswrapper[4817]: I0314 05:37:09.318698 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-login\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318827 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318860 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318879 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp2v8\" (UniqueName: \"kubernetes.io/projected/c413bccd-da15-4da9-9052-2d120a634837-kube-api-access-wp2v8\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318950 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.318992 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.331193 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.334141 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420321 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420373 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c413bccd-da15-4da9-9052-2d120a634837-audit-dir\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " 
pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420420 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-login\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420472 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c413bccd-da15-4da9-9052-2d120a634837-audit-dir\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420488 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420556 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420586 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp2v8\" (UniqueName: \"kubernetes.io/projected/c413bccd-da15-4da9-9052-2d120a634837-kube-api-access-wp2v8\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420622 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420661 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420709 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420731 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-session\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420753 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420783 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-audit-policies\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420805 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-error\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.420833 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.421546 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-audit-policies\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.422002 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.422456 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.423408 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.426408 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.427139 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.428141 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.430944 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-session\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.430977 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.431117 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-login\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.431252 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.438933 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp2v8\" (UniqueName: \"kubernetes.io/projected/c413bccd-da15-4da9-9052-2d120a634837-kube-api-access-wp2v8\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.441254 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c413bccd-da15-4da9-9052-2d120a634837-v4-0-config-user-template-error\") pod \"oauth-openshift-6bcf78946b-ffvvb\" (UID: \"c413bccd-da15-4da9-9052-2d120a634837\") " pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.473270 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.678847 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.685715 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.688028 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.709669 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.729951 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.771163 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.802834 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.956340 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 14 05:37:09 crc kubenswrapper[4817]: I0314 05:37:09.962372 4817 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.064455 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.240108 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.300911 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.328126 4817 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.340180 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.349457 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.374026 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.402671 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.490978 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.542211 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.574481 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.609971 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.658886 4817 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.677659 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.739682 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef15d93-0fd9-4fc8-8f61-a29eaca479a4" path="/var/lib/kubelet/pods/9ef15d93-0fd9-4fc8-8f61-a29eaca479a4/volumes"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.842856 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 14 05:37:10 crc kubenswrapper[4817]: I0314 05:37:10.997495 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.019992 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.058163 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.118502 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.142021 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.207175 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.276020 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.303242 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.337313 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.368433 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.404310 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.420390 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.475042 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.497728 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.621528 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.631384 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.649457 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.723625 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.824468 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.879006 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Mar 14 05:37:11 crc kubenswrapper[4817]: I0314 05:37:11.948577 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.010764 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.081782 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.175438 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.274017 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.304561 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.334113 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.346449 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.391928 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.425552 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.482144 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.485419 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.487762 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.517091 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.567978 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.674618 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.697606 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.712048 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.746090 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.751186 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.754660 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.825069 4817 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.825297 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038" gracePeriod=5
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.859467 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.942070 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.942443 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.956857 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 14 05:37:12 crc kubenswrapper[4817]: I0314 05:37:12.967220 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.223860 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.304935 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.335082 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.338619 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.456720 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.474652 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.639606 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.741702 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.752153 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.846397 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.862457 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.922527 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"]
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.983789 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 14 05:37:13 crc kubenswrapper[4817]: I0314 05:37:13.985142 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.018357 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.212253 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.227566 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.325791 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.362513 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.409183 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.428239 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.428389 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"]
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.443644 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.478537 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.515129 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.549305 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.557375 4817 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.665423 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 14 05:37:14 crc kubenswrapper[4817]: I0314 05:37:14.852671 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" event={"ID":"c413bccd-da15-4da9-9052-2d120a634837","Type":"ContainerStarted","Data":"8e5944a15333907708462e8ffc38a326dc8656293551041e73b2db5c98871cb4"}
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.053258 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.166821 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.237261 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.319383 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.486062 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.526726 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.531950 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.649044 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.785786 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.862058 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" event={"ID":"c413bccd-da15-4da9-9052-2d120a634837","Type":"ContainerStarted","Data":"47ed5bdbc74e04b82a0b064874eb5cbe3064b126d75965ed567a9387676c3660"}
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.862541 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.868790 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.880626 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.889933 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6bcf78946b-ffvvb" podStartSLOduration=47.889912444 podStartE2EDuration="47.889912444s" podCreationTimestamp="2026-03-14 05:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:37:15.886241777 +0000 UTC m=+289.924502543" watchObservedRunningTime="2026-03-14 05:37:15.889912444 +0000 UTC m=+289.928173200"
Mar 14 05:37:15 crc kubenswrapper[4817]: I0314 05:37:15.982703 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.015343 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.077403 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.173467 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.260666 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.277709 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.528341 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.545259 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.618675 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.628683 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.786014 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.794994 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.852999 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 14 05:37:16 crc kubenswrapper[4817]: I0314 05:37:16.875511 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.056952 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.152372 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.320471 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.410137 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.414243 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.442930 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.597136 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.662834 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 14 05:37:17 crc kubenswrapper[4817]: I0314 05:37:17.800164 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.135674 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.165523 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.472247 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.472328 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.513286 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.523377 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.619754 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.657463 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.657598 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.657634 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.657672 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID:
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.657733 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.657756 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.658054 4817 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.658119 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.658156 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.658181 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.672886 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.741458 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.759545 4817 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.759840 4817 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.760023 4817 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.760156 4817 
reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.886608 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.886689 4817 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038" exitCode=137 Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.886749 4817 scope.go:117] "RemoveContainer" containerID="a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.886756 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.911361 4817 scope.go:117] "RemoveContainer" containerID="a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038" Mar 14 05:37:18 crc kubenswrapper[4817]: E0314 05:37:18.911924 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038\": container with ID starting with a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038 not found: ID does not exist" containerID="a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038" Mar 14 05:37:18 crc kubenswrapper[4817]: I0314 05:37:18.911961 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038"} err="failed to get container status 
\"a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038\": rpc error: code = NotFound desc = could not find container \"a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038\": container with ID starting with a4d9ef37eec2e2eccc7cb90e663bcb96f7d258f942580cb4de0458f721729038 not found: ID does not exist" Mar 14 05:37:20 crc kubenswrapper[4817]: I0314 05:37:20.082326 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 14 05:37:20 crc kubenswrapper[4817]: I0314 05:37:20.173480 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 14 05:37:22 crc kubenswrapper[4817]: I0314 05:37:22.983264 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-669b45cd95-xn27j"] Mar 14 05:37:22 crc kubenswrapper[4817]: I0314 05:37:22.983751 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" podUID="3c6b4546-90e1-4ce1-8f15-394413dcfabc" containerName="controller-manager" containerID="cri-o://60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3" gracePeriod=30 Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.102917 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"] Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.103153 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" podUID="92005e43-d016-41d7-90c0-670bad0588d0" containerName="route-controller-manager" containerID="cri-o://f91e6970c5e9d258394b98977fcc968c3ea9d45ba3c110627e12f55def2a53a0" gracePeriod=30 Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.443758 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.616799 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv5fw\" (UniqueName: \"kubernetes.io/projected/3c6b4546-90e1-4ce1-8f15-394413dcfabc-kube-api-access-cv5fw\") pod \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.616911 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-proxy-ca-bundles\") pod \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.616942 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-config\") pod \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.616968 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-client-ca\") pod \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.617019 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6b4546-90e1-4ce1-8f15-394413dcfabc-serving-cert\") pod \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\" (UID: \"3c6b4546-90e1-4ce1-8f15-394413dcfabc\") " Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.617738 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3c6b4546-90e1-4ce1-8f15-394413dcfabc" (UID: "3c6b4546-90e1-4ce1-8f15-394413dcfabc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.617879 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c6b4546-90e1-4ce1-8f15-394413dcfabc" (UID: "3c6b4546-90e1-4ce1-8f15-394413dcfabc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.618105 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-config" (OuterVolumeSpecName: "config") pod "3c6b4546-90e1-4ce1-8f15-394413dcfabc" (UID: "3c6b4546-90e1-4ce1-8f15-394413dcfabc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.623660 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6b4546-90e1-4ce1-8f15-394413dcfabc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c6b4546-90e1-4ce1-8f15-394413dcfabc" (UID: "3c6b4546-90e1-4ce1-8f15-394413dcfabc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.623829 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6b4546-90e1-4ce1-8f15-394413dcfabc-kube-api-access-cv5fw" (OuterVolumeSpecName: "kube-api-access-cv5fw") pod "3c6b4546-90e1-4ce1-8f15-394413dcfabc" (UID: "3c6b4546-90e1-4ce1-8f15-394413dcfabc"). InnerVolumeSpecName "kube-api-access-cv5fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.718013 4817 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.718142 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.718159 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c6b4546-90e1-4ce1-8f15-394413dcfabc-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.718200 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6b4546-90e1-4ce1-8f15-394413dcfabc-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.718212 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv5fw\" (UniqueName: \"kubernetes.io/projected/3c6b4546-90e1-4ce1-8f15-394413dcfabc-kube-api-access-cv5fw\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.920446 4817 generic.go:334] "Generic (PLEG): container finished" podID="92005e43-d016-41d7-90c0-670bad0588d0" containerID="f91e6970c5e9d258394b98977fcc968c3ea9d45ba3c110627e12f55def2a53a0" exitCode=0 Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.920544 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" event={"ID":"92005e43-d016-41d7-90c0-670bad0588d0","Type":"ContainerDied","Data":"f91e6970c5e9d258394b98977fcc968c3ea9d45ba3c110627e12f55def2a53a0"} Mar 14 
05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.922151 4817 generic.go:334] "Generic (PLEG): container finished" podID="3c6b4546-90e1-4ce1-8f15-394413dcfabc" containerID="60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3" exitCode=0 Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.922187 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" event={"ID":"3c6b4546-90e1-4ce1-8f15-394413dcfabc","Type":"ContainerDied","Data":"60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3"} Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.922214 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" event={"ID":"3c6b4546-90e1-4ce1-8f15-394413dcfabc","Type":"ContainerDied","Data":"ea782bc5044453e2ffa0340c7d7b08163616e133b4741f1f0ed9afdb8bd18de8"} Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.922228 4817 scope.go:117] "RemoveContainer" containerID="60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.922316 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-669b45cd95-xn27j" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.950823 4817 scope.go:117] "RemoveContainer" containerID="60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3" Mar 14 05:37:23 crc kubenswrapper[4817]: E0314 05:37:23.951767 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3\": container with ID starting with 60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3 not found: ID does not exist" containerID="60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.951816 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3"} err="failed to get container status \"60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3\": rpc error: code = NotFound desc = could not find container \"60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3\": container with ID starting with 60fac2b52d2e3800be368bff19d3baaded9018aae6e40414008e4b9b49fe97d3 not found: ID does not exist" Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.952446 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-669b45cd95-xn27j"] Mar 14 05:37:23 crc kubenswrapper[4817]: I0314 05:37:23.956310 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-669b45cd95-xn27j"] Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.049672 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.122087 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jttpm\" (UniqueName: \"kubernetes.io/projected/92005e43-d016-41d7-90c0-670bad0588d0-kube-api-access-jttpm\") pod \"92005e43-d016-41d7-90c0-670bad0588d0\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.122124 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-client-ca\") pod \"92005e43-d016-41d7-90c0-670bad0588d0\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.122167 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92005e43-d016-41d7-90c0-670bad0588d0-serving-cert\") pod \"92005e43-d016-41d7-90c0-670bad0588d0\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.122213 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-config\") pod \"92005e43-d016-41d7-90c0-670bad0588d0\" (UID: \"92005e43-d016-41d7-90c0-670bad0588d0\") " Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.122865 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-config" (OuterVolumeSpecName: "config") pod "92005e43-d016-41d7-90c0-670bad0588d0" (UID: "92005e43-d016-41d7-90c0-670bad0588d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.122859 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-client-ca" (OuterVolumeSpecName: "client-ca") pod "92005e43-d016-41d7-90c0-670bad0588d0" (UID: "92005e43-d016-41d7-90c0-670bad0588d0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.130148 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92005e43-d016-41d7-90c0-670bad0588d0-kube-api-access-jttpm" (OuterVolumeSpecName: "kube-api-access-jttpm") pod "92005e43-d016-41d7-90c0-670bad0588d0" (UID: "92005e43-d016-41d7-90c0-670bad0588d0"). InnerVolumeSpecName "kube-api-access-jttpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.130277 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92005e43-d016-41d7-90c0-670bad0588d0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "92005e43-d016-41d7-90c0-670bad0588d0" (UID: "92005e43-d016-41d7-90c0-670bad0588d0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.223705 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jttpm\" (UniqueName: \"kubernetes.io/projected/92005e43-d016-41d7-90c0-670bad0588d0-kube-api-access-jttpm\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.223752 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.223764 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92005e43-d016-41d7-90c0-670bad0588d0-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.223776 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92005e43-d016-41d7-90c0-670bad0588d0-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.335798 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4"] Mar 14 05:37:24 crc kubenswrapper[4817]: E0314 05:37:24.336159 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.336181 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 05:37:24 crc kubenswrapper[4817]: E0314 05:37:24.336200 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92005e43-d016-41d7-90c0-670bad0588d0" containerName="route-controller-manager" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.336209 4817 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="92005e43-d016-41d7-90c0-670bad0588d0" containerName="route-controller-manager" Mar 14 05:37:24 crc kubenswrapper[4817]: E0314 05:37:24.336221 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6b4546-90e1-4ce1-8f15-394413dcfabc" containerName="controller-manager" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.336230 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6b4546-90e1-4ce1-8f15-394413dcfabc" containerName="controller-manager" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.336355 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="92005e43-d016-41d7-90c0-670bad0588d0" containerName="route-controller-manager" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.336379 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.336391 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6b4546-90e1-4ce1-8f15-394413dcfabc" containerName="controller-manager" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.336944 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.339067 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.340318 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns"] Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.341236 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.341566 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.341634 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.341748 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.342248 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.342940 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.344984 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4"] Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.350494 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns"] Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.350768 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.526738 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5073340e-cce0-4b60-964c-7516f32e48b4-serving-cert\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " 
pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.526787 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-config\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.526815 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbfa859-6e65-46a5-ae52-04d13fd9d333-serving-cert\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.526837 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-proxy-ca-bundles\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.526867 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnpb6\" (UniqueName: \"kubernetes.io/projected/5073340e-cce0-4b60-964c-7516f32e48b4-kube-api-access-jnpb6\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.527241 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-config\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.527344 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-client-ca\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.527414 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4rhn\" (UniqueName: \"kubernetes.io/projected/abbfa859-6e65-46a5-ae52-04d13fd9d333-kube-api-access-j4rhn\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.527479 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-client-ca\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628272 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-config\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: 
\"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628319 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbfa859-6e65-46a5-ae52-04d13fd9d333-serving-cert\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628339 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-proxy-ca-bundles\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628366 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnpb6\" (UniqueName: \"kubernetes.io/projected/5073340e-cce0-4b60-964c-7516f32e48b4-kube-api-access-jnpb6\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628410 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-config\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628434 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-client-ca\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628459 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4rhn\" (UniqueName: \"kubernetes.io/projected/abbfa859-6e65-46a5-ae52-04d13fd9d333-kube-api-access-j4rhn\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628485 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-client-ca\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.628508 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5073340e-cce0-4b60-964c-7516f32e48b4-serving-cert\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.629653 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-client-ca\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc 
kubenswrapper[4817]: I0314 05:37:24.630175 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-proxy-ca-bundles\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.630311 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-client-ca\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.630415 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbfa859-6e65-46a5-ae52-04d13fd9d333-config\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.630751 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-config\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.632412 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbfa859-6e65-46a5-ae52-04d13fd9d333-serving-cert\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " 
pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.633192 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5073340e-cce0-4b60-964c-7516f32e48b4-serving-cert\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.650530 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4rhn\" (UniqueName: \"kubernetes.io/projected/abbfa859-6e65-46a5-ae52-04d13fd9d333-kube-api-access-j4rhn\") pod \"controller-manager-58c9cff4b6-zhlt4\" (UID: \"abbfa859-6e65-46a5-ae52-04d13fd9d333\") " pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.654184 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnpb6\" (UniqueName: \"kubernetes.io/projected/5073340e-cce0-4b60-964c-7516f32e48b4-kube-api-access-jnpb6\") pod \"route-controller-manager-7f46bdc698-gvkns\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.660805 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.679781 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.745539 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6b4546-90e1-4ce1-8f15-394413dcfabc" path="/var/lib/kubelet/pods/3c6b4546-90e1-4ce1-8f15-394413dcfabc/volumes" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.933872 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.933864 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg" event={"ID":"92005e43-d016-41d7-90c0-670bad0588d0","Type":"ContainerDied","Data":"bd8cccaefc099309f08e2716047ab2aba51df1f8cff8231ec5255436f72f0e4e"} Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.934079 4817 scope.go:117] "RemoveContainer" containerID="f91e6970c5e9d258394b98977fcc968c3ea9d45ba3c110627e12f55def2a53a0" Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.947797 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"] Mar 14 05:37:24 crc kubenswrapper[4817]: I0314 05:37:24.951331 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d64f9b597-mvjmg"] Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.080416 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns"] Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.135244 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4"] Mar 14 05:37:25 crc kubenswrapper[4817]: W0314 05:37:25.136781 4817 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabbfa859_6e65_46a5_ae52_04d13fd9d333.slice/crio-7a9310858d17265c9c3b042842342124fd39c0e7009a4f0ae1c8ce1a0a398f00 WatchSource:0}: Error finding container 7a9310858d17265c9c3b042842342124fd39c0e7009a4f0ae1c8ce1a0a398f00: Status 404 returned error can't find the container with id 7a9310858d17265c9c3b042842342124fd39c0e7009a4f0ae1c8ce1a0a398f00 Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.942674 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" event={"ID":"5073340e-cce0-4b60-964c-7516f32e48b4","Type":"ContainerStarted","Data":"3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066"} Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.943255 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.943396 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" event={"ID":"5073340e-cce0-4b60-964c-7516f32e48b4","Type":"ContainerStarted","Data":"8c2af79185fff26cb2db3cea7c94a107d7ba0df10caae72f7e1d4548173bb1c0"} Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.945472 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" event={"ID":"abbfa859-6e65-46a5-ae52-04d13fd9d333","Type":"ContainerStarted","Data":"1daedfcb349d6351025b7ea250a8048c8c7bf7712c0d516d290e3369a7d12534"} Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.945530 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" 
event={"ID":"abbfa859-6e65-46a5-ae52-04d13fd9d333","Type":"ContainerStarted","Data":"7a9310858d17265c9c3b042842342124fd39c0e7009a4f0ae1c8ce1a0a398f00"} Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.945883 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.949021 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.950134 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.964016 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" podStartSLOduration=2.963991824 podStartE2EDuration="2.963991824s" podCreationTimestamp="2026-03-14 05:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:37:25.959811992 +0000 UTC m=+299.998072748" watchObservedRunningTime="2026-03-14 05:37:25.963991824 +0000 UTC m=+300.002252570" Mar 14 05:37:25 crc kubenswrapper[4817]: I0314 05:37:25.983877 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c9cff4b6-zhlt4" podStartSLOduration=2.983858356 podStartE2EDuration="2.983858356s" podCreationTimestamp="2026-03-14 05:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:37:25.982345172 +0000 UTC m=+300.020605938" watchObservedRunningTime="2026-03-14 05:37:25.983858356 +0000 UTC m=+300.022119112" Mar 14 05:37:26 
crc kubenswrapper[4817]: I0314 05:37:26.740332 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92005e43-d016-41d7-90c0-670bad0588d0" path="/var/lib/kubelet/pods/92005e43-d016-41d7-90c0-670bad0588d0/volumes" Mar 14 05:37:38 crc kubenswrapper[4817]: I0314 05:37:38.565775 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:37:38 crc kubenswrapper[4817]: I0314 05:37:38.566405 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:37:38 crc kubenswrapper[4817]: I0314 05:37:38.566457 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:37:38 crc kubenswrapper[4817]: I0314 05:37:38.567086 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"602f8b7fe8f67305b6072961c01e7731ff09d2e6a4eb84319ec33e6705c0edda"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 05:37:38 crc kubenswrapper[4817]: I0314 05:37:38.567161 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://602f8b7fe8f67305b6072961c01e7731ff09d2e6a4eb84319ec33e6705c0edda" 
gracePeriod=600 Mar 14 05:37:40 crc kubenswrapper[4817]: I0314 05:37:40.026469 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="602f8b7fe8f67305b6072961c01e7731ff09d2e6a4eb84319ec33e6705c0edda" exitCode=0 Mar 14 05:37:40 crc kubenswrapper[4817]: I0314 05:37:40.026556 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"602f8b7fe8f67305b6072961c01e7731ff09d2e6a4eb84319ec33e6705c0edda"} Mar 14 05:37:40 crc kubenswrapper[4817]: I0314 05:37:40.027052 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"7e635f45e2fa41c4eef67a52269f590c74e82163f2975e7047a02129a72dd1f8"} Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.003654 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns"] Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.004960 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" podUID="5073340e-cce0-4b60-964c-7516f32e48b4" containerName="route-controller-manager" containerID="cri-o://3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066" gracePeriod=30 Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.444463 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.513714 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-client-ca\") pod \"5073340e-cce0-4b60-964c-7516f32e48b4\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.513799 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnpb6\" (UniqueName: \"kubernetes.io/projected/5073340e-cce0-4b60-964c-7516f32e48b4-kube-api-access-jnpb6\") pod \"5073340e-cce0-4b60-964c-7516f32e48b4\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.513849 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-config\") pod \"5073340e-cce0-4b60-964c-7516f32e48b4\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.513955 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5073340e-cce0-4b60-964c-7516f32e48b4-serving-cert\") pod \"5073340e-cce0-4b60-964c-7516f32e48b4\" (UID: \"5073340e-cce0-4b60-964c-7516f32e48b4\") " Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.516162 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "5073340e-cce0-4b60-964c-7516f32e48b4" (UID: "5073340e-cce0-4b60-964c-7516f32e48b4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.516312 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-config" (OuterVolumeSpecName: "config") pod "5073340e-cce0-4b60-964c-7516f32e48b4" (UID: "5073340e-cce0-4b60-964c-7516f32e48b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.526187 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5073340e-cce0-4b60-964c-7516f32e48b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5073340e-cce0-4b60-964c-7516f32e48b4" (UID: "5073340e-cce0-4b60-964c-7516f32e48b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.526322 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5073340e-cce0-4b60-964c-7516f32e48b4-kube-api-access-jnpb6" (OuterVolumeSpecName: "kube-api-access-jnpb6") pod "5073340e-cce0-4b60-964c-7516f32e48b4" (UID: "5073340e-cce0-4b60-964c-7516f32e48b4"). InnerVolumeSpecName "kube-api-access-jnpb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.615711 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5073340e-cce0-4b60-964c-7516f32e48b4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.615751 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.615764 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnpb6\" (UniqueName: \"kubernetes.io/projected/5073340e-cce0-4b60-964c-7516f32e48b4-kube-api-access-jnpb6\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:43 crc kubenswrapper[4817]: I0314 05:37:43.615779 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5073340e-cce0-4b60-964c-7516f32e48b4-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.048605 4817 generic.go:334] "Generic (PLEG): container finished" podID="5073340e-cce0-4b60-964c-7516f32e48b4" containerID="3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066" exitCode=0 Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.048656 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" event={"ID":"5073340e-cce0-4b60-964c-7516f32e48b4","Type":"ContainerDied","Data":"3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066"} Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.048687 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" 
event={"ID":"5073340e-cce0-4b60-964c-7516f32e48b4","Type":"ContainerDied","Data":"8c2af79185fff26cb2db3cea7c94a107d7ba0df10caae72f7e1d4548173bb1c0"} Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.048708 4817 scope.go:117] "RemoveContainer" containerID="3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.048834 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.071476 4817 scope.go:117] "RemoveContainer" containerID="3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066" Mar 14 05:37:44 crc kubenswrapper[4817]: E0314 05:37:44.071886 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066\": container with ID starting with 3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066 not found: ID does not exist" containerID="3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.071957 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066"} err="failed to get container status \"3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066\": rpc error: code = NotFound desc = could not find container \"3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066\": container with ID starting with 3dc256b335864dc639628abc1f8f5d110bdaa28552d3997824db12b19483a066 not found: ID does not exist" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.090221 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns"] Mar 
14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.096467 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-gvkns"] Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.334457 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg"] Mar 14 05:37:44 crc kubenswrapper[4817]: E0314 05:37:44.335169 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5073340e-cce0-4b60-964c-7516f32e48b4" containerName="route-controller-manager" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.335193 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5073340e-cce0-4b60-964c-7516f32e48b4" containerName="route-controller-manager" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.335300 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="5073340e-cce0-4b60-964c-7516f32e48b4" containerName="route-controller-manager" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.335653 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.340225 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.340358 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.340224 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.341139 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.341408 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.343250 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.349875 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg"] Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.425880 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be54f901-3a8c-4d2a-a166-27374c11f422-serving-cert\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.425954 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt4gh\" (UniqueName: \"kubernetes.io/projected/be54f901-3a8c-4d2a-a166-27374c11f422-kube-api-access-rt4gh\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.425997 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-config\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.426023 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-client-ca\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.526645 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-client-ca\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.527488 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-client-ca\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: 
\"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.527608 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be54f901-3a8c-4d2a-a166-27374c11f422-serving-cert\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.528325 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt4gh\" (UniqueName: \"kubernetes.io/projected/be54f901-3a8c-4d2a-a166-27374c11f422-kube-api-access-rt4gh\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.528384 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-config\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.529270 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-config\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.537508 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/be54f901-3a8c-4d2a-a166-27374c11f422-serving-cert\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.548497 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt4gh\" (UniqueName: \"kubernetes.io/projected/be54f901-3a8c-4d2a-a166-27374c11f422-kube-api-access-rt4gh\") pod \"route-controller-manager-675dc77d8c-5s5fg\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.654162 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:44 crc kubenswrapper[4817]: I0314 05:37:44.739574 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5073340e-cce0-4b60-964c-7516f32e48b4" path="/var/lib/kubelet/pods/5073340e-cce0-4b60-964c-7516f32e48b4/volumes" Mar 14 05:37:45 crc kubenswrapper[4817]: I0314 05:37:45.067126 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg"] Mar 14 05:37:45 crc kubenswrapper[4817]: W0314 05:37:45.072761 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe54f901_3a8c_4d2a_a166_27374c11f422.slice/crio-e0000cfca11b52f88b0f64364a97956ff7e0f1d413c22de8608bdb2a3dbe7354 WatchSource:0}: Error finding container e0000cfca11b52f88b0f64364a97956ff7e0f1d413c22de8608bdb2a3dbe7354: Status 404 returned error can't find the container with id e0000cfca11b52f88b0f64364a97956ff7e0f1d413c22de8608bdb2a3dbe7354 Mar 14 05:37:46 crc kubenswrapper[4817]: I0314 
05:37:46.062988 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" event={"ID":"be54f901-3a8c-4d2a-a166-27374c11f422","Type":"ContainerStarted","Data":"db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006"} Mar 14 05:37:46 crc kubenswrapper[4817]: I0314 05:37:46.063366 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" event={"ID":"be54f901-3a8c-4d2a-a166-27374c11f422","Type":"ContainerStarted","Data":"e0000cfca11b52f88b0f64364a97956ff7e0f1d413c22de8608bdb2a3dbe7354"} Mar 14 05:37:46 crc kubenswrapper[4817]: I0314 05:37:46.063553 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:46 crc kubenswrapper[4817]: I0314 05:37:46.069854 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:37:46 crc kubenswrapper[4817]: I0314 05:37:46.098509 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" podStartSLOduration=3.098486342 podStartE2EDuration="3.098486342s" podCreationTimestamp="2026-03-14 05:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:37:46.092305981 +0000 UTC m=+320.130566737" watchObservedRunningTime="2026-03-14 05:37:46.098486342 +0000 UTC m=+320.136747098" Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.171371 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cs45w"] Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.171733 4817 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-cs45w" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="registry-server" containerID="cri-o://908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96" gracePeriod=2 Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.643751 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.682259 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-utilities\") pod \"d94326da-6089-4fb4-be56-29635a38651f\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.682314 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-catalog-content\") pod \"d94326da-6089-4fb4-be56-29635a38651f\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.682346 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4fjd\" (UniqueName: \"kubernetes.io/projected/d94326da-6089-4fb4-be56-29635a38651f-kube-api-access-g4fjd\") pod \"d94326da-6089-4fb4-be56-29635a38651f\" (UID: \"d94326da-6089-4fb4-be56-29635a38651f\") " Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.683305 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-utilities" (OuterVolumeSpecName: "utilities") pod "d94326da-6089-4fb4-be56-29635a38651f" (UID: "d94326da-6089-4fb4-be56-29635a38651f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.687863 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d94326da-6089-4fb4-be56-29635a38651f-kube-api-access-g4fjd" (OuterVolumeSpecName: "kube-api-access-g4fjd") pod "d94326da-6089-4fb4-be56-29635a38651f" (UID: "d94326da-6089-4fb4-be56-29635a38651f"). InnerVolumeSpecName "kube-api-access-g4fjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.783706 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.783754 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4fjd\" (UniqueName: \"kubernetes.io/projected/d94326da-6089-4fb4-be56-29635a38651f-kube-api-access-g4fjd\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.816741 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d94326da-6089-4fb4-be56-29635a38651f" (UID: "d94326da-6089-4fb4-be56-29635a38651f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:37:47 crc kubenswrapper[4817]: I0314 05:37:47.885269 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94326da-6089-4fb4-be56-29635a38651f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.073933 4817 generic.go:334] "Generic (PLEG): container finished" podID="d94326da-6089-4fb4-be56-29635a38651f" containerID="908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96" exitCode=0 Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.073980 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cs45w" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.074009 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cs45w" event={"ID":"d94326da-6089-4fb4-be56-29635a38651f","Type":"ContainerDied","Data":"908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96"} Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.074060 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cs45w" event={"ID":"d94326da-6089-4fb4-be56-29635a38651f","Type":"ContainerDied","Data":"4c55967916e4e5fb0756aaf20d9c8482c64bea5075b7fec98771ae0a15d5bf11"} Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.074078 4817 scope.go:117] "RemoveContainer" containerID="908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.088496 4817 scope.go:117] "RemoveContainer" containerID="6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.099066 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cs45w"] Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 
05:37:48.121344 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cs45w"] Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.122520 4817 scope.go:117] "RemoveContainer" containerID="70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.136522 4817 scope.go:117] "RemoveContainer" containerID="908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96" Mar 14 05:37:48 crc kubenswrapper[4817]: E0314 05:37:48.137078 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96\": container with ID starting with 908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96 not found: ID does not exist" containerID="908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.137122 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96"} err="failed to get container status \"908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96\": rpc error: code = NotFound desc = could not find container \"908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96\": container with ID starting with 908bc94f9557e3a0cb6d401bc195681274ef21f5a92497f0dc8caa73716b5c96 not found: ID does not exist" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.137155 4817 scope.go:117] "RemoveContainer" containerID="6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a" Mar 14 05:37:48 crc kubenswrapper[4817]: E0314 05:37:48.137466 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a\": container with ID 
starting with 6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a not found: ID does not exist" containerID="6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.137488 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a"} err="failed to get container status \"6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a\": rpc error: code = NotFound desc = could not find container \"6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a\": container with ID starting with 6faabaa7226c68a0b7e934c2e36464b6b7df7c92f28ec26f5d8951dec4cd5c5a not found: ID does not exist" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.137503 4817 scope.go:117] "RemoveContainer" containerID="70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c" Mar 14 05:37:48 crc kubenswrapper[4817]: E0314 05:37:48.138138 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c\": container with ID starting with 70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c not found: ID does not exist" containerID="70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.138169 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c"} err="failed to get container status \"70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c\": rpc error: code = NotFound desc = could not find container \"70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c\": container with ID starting with 70b9189cd32bf1e5e5da308cee1defe1f152ab494315b86a445e0881c0c0139c not found: 
ID does not exist" Mar 14 05:37:48 crc kubenswrapper[4817]: I0314 05:37:48.738935 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d94326da-6089-4fb4-be56-29635a38651f" path="/var/lib/kubelet/pods/d94326da-6089-4fb4-be56-29635a38651f/volumes" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.169609 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557778-7pqht"] Mar 14 05:38:00 crc kubenswrapper[4817]: E0314 05:38:00.170319 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="extract-content" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.170331 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="extract-content" Mar 14 05:38:00 crc kubenswrapper[4817]: E0314 05:38:00.170338 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="extract-utilities" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.170345 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="extract-utilities" Mar 14 05:38:00 crc kubenswrapper[4817]: E0314 05:38:00.170354 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="registry-server" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.170362 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="registry-server" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.170451 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d94326da-6089-4fb4-be56-29635a38651f" containerName="registry-server" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.170779 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557778-7pqht" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.172718 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.173493 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.175042 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.185170 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557778-7pqht"] Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.335016 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m65n6\" (UniqueName: \"kubernetes.io/projected/6044a7d1-0c67-4d5f-91a7-0c856ad34078-kube-api-access-m65n6\") pod \"auto-csr-approver-29557778-7pqht\" (UID: \"6044a7d1-0c67-4d5f-91a7-0c856ad34078\") " pod="openshift-infra/auto-csr-approver-29557778-7pqht" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.436357 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m65n6\" (UniqueName: \"kubernetes.io/projected/6044a7d1-0c67-4d5f-91a7-0c856ad34078-kube-api-access-m65n6\") pod \"auto-csr-approver-29557778-7pqht\" (UID: \"6044a7d1-0c67-4d5f-91a7-0c856ad34078\") " pod="openshift-infra/auto-csr-approver-29557778-7pqht" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.466937 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m65n6\" (UniqueName: \"kubernetes.io/projected/6044a7d1-0c67-4d5f-91a7-0c856ad34078-kube-api-access-m65n6\") pod \"auto-csr-approver-29557778-7pqht\" (UID: \"6044a7d1-0c67-4d5f-91a7-0c856ad34078\") " 
pod="openshift-infra/auto-csr-approver-29557778-7pqht" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.488611 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557778-7pqht" Mar 14 05:38:00 crc kubenswrapper[4817]: I0314 05:38:00.902431 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557778-7pqht"] Mar 14 05:38:00 crc kubenswrapper[4817]: W0314 05:38:00.910011 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6044a7d1_0c67_4d5f_91a7_0c856ad34078.slice/crio-90f58a691951a658ed39dbeb670c8f89713f745d569bf45546105c7feaca2221 WatchSource:0}: Error finding container 90f58a691951a658ed39dbeb670c8f89713f745d569bf45546105c7feaca2221: Status 404 returned error can't find the container with id 90f58a691951a658ed39dbeb670c8f89713f745d569bf45546105c7feaca2221 Mar 14 05:38:01 crc kubenswrapper[4817]: I0314 05:38:01.152966 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557778-7pqht" event={"ID":"6044a7d1-0c67-4d5f-91a7-0c856ad34078","Type":"ContainerStarted","Data":"90f58a691951a658ed39dbeb670c8f89713f745d569bf45546105c7feaca2221"} Mar 14 05:38:03 crc kubenswrapper[4817]: I0314 05:38:03.466579 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg"] Mar 14 05:38:03 crc kubenswrapper[4817]: I0314 05:38:03.467227 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" podUID="be54f901-3a8c-4d2a-a166-27374c11f422" containerName="route-controller-manager" containerID="cri-o://db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006" gracePeriod=30 Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.164244 4817 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.174916 4817 generic.go:334] "Generic (PLEG): container finished" podID="6044a7d1-0c67-4d5f-91a7-0c856ad34078" containerID="82a1b183acf038e01ad735f258effb5e2efac59e3b048f6b5fcfcd879d0f9a26" exitCode=0 Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.174989 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557778-7pqht" event={"ID":"6044a7d1-0c67-4d5f-91a7-0c856ad34078","Type":"ContainerDied","Data":"82a1b183acf038e01ad735f258effb5e2efac59e3b048f6b5fcfcd879d0f9a26"} Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.175983 4817 generic.go:334] "Generic (PLEG): container finished" podID="be54f901-3a8c-4d2a-a166-27374c11f422" containerID="db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006" exitCode=0 Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.176006 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" event={"ID":"be54f901-3a8c-4d2a-a166-27374c11f422","Type":"ContainerDied","Data":"db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006"} Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.176027 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" event={"ID":"be54f901-3a8c-4d2a-a166-27374c11f422","Type":"ContainerDied","Data":"e0000cfca11b52f88b0f64364a97956ff7e0f1d413c22de8608bdb2a3dbe7354"} Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.176045 4817 scope.go:117] "RemoveContainer" containerID="db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.176060 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.200204 4817 scope.go:117] "RemoveContainer" containerID="db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006" Mar 14 05:38:04 crc kubenswrapper[4817]: E0314 05:38:04.200803 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006\": container with ID starting with db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006 not found: ID does not exist" containerID="db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.200828 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006"} err="failed to get container status \"db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006\": rpc error: code = NotFound desc = could not find container \"db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006\": container with ID starting with db585f32c3898fd9f82b5e0c31f5f81a16e777d84642a5a81051b52c6df7b006 not found: ID does not exist" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.287375 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt4gh\" (UniqueName: \"kubernetes.io/projected/be54f901-3a8c-4d2a-a166-27374c11f422-kube-api-access-rt4gh\") pod \"be54f901-3a8c-4d2a-a166-27374c11f422\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.287429 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be54f901-3a8c-4d2a-a166-27374c11f422-serving-cert\") pod 
\"be54f901-3a8c-4d2a-a166-27374c11f422\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.287461 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-client-ca\") pod \"be54f901-3a8c-4d2a-a166-27374c11f422\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.287516 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-config\") pod \"be54f901-3a8c-4d2a-a166-27374c11f422\" (UID: \"be54f901-3a8c-4d2a-a166-27374c11f422\") " Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.288260 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-client-ca" (OuterVolumeSpecName: "client-ca") pod "be54f901-3a8c-4d2a-a166-27374c11f422" (UID: "be54f901-3a8c-4d2a-a166-27374c11f422"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.288300 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-config" (OuterVolumeSpecName: "config") pod "be54f901-3a8c-4d2a-a166-27374c11f422" (UID: "be54f901-3a8c-4d2a-a166-27374c11f422"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.292206 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be54f901-3a8c-4d2a-a166-27374c11f422-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be54f901-3a8c-4d2a-a166-27374c11f422" (UID: "be54f901-3a8c-4d2a-a166-27374c11f422"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.293054 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be54f901-3a8c-4d2a-a166-27374c11f422-kube-api-access-rt4gh" (OuterVolumeSpecName: "kube-api-access-rt4gh") pod "be54f901-3a8c-4d2a-a166-27374c11f422" (UID: "be54f901-3a8c-4d2a-a166-27374c11f422"). InnerVolumeSpecName "kube-api-access-rt4gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.388600 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt4gh\" (UniqueName: \"kubernetes.io/projected/be54f901-3a8c-4d2a-a166-27374c11f422-kube-api-access-rt4gh\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.388630 4817 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be54f901-3a8c-4d2a-a166-27374c11f422-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.388640 4817 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.388649 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be54f901-3a8c-4d2a-a166-27374c11f422-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.505108 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg"] Mar 14 05:38:04 crc kubenswrapper[4817]: I0314 05:38:04.511734 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675dc77d8c-5s5fg"] Mar 14 05:38:04 crc 
kubenswrapper[4817]: I0314 05:38:04.737861 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be54f901-3a8c-4d2a-a166-27374c11f422" path="/var/lib/kubelet/pods/be54f901-3a8c-4d2a-a166-27374c11f422/volumes" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.361496 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg"] Mar 14 05:38:05 crc kubenswrapper[4817]: E0314 05:38:05.361826 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be54f901-3a8c-4d2a-a166-27374c11f422" containerName="route-controller-manager" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.361855 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="be54f901-3a8c-4d2a-a166-27374c11f422" containerName="route-controller-manager" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.362924 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="be54f901-3a8c-4d2a-a166-27374c11f422" containerName="route-controller-manager" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.365621 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.369706 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.370085 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.370260 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.370783 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.370992 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.371067 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.372804 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg"] Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.402306 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-serving-cert\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.402364 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-client-ca\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.402390 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-config\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.402428 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z4g5\" (UniqueName: \"kubernetes.io/projected/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-kube-api-access-8z4g5\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.475788 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557778-7pqht" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.503811 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m65n6\" (UniqueName: \"kubernetes.io/projected/6044a7d1-0c67-4d5f-91a7-0c856ad34078-kube-api-access-m65n6\") pod \"6044a7d1-0c67-4d5f-91a7-0c856ad34078\" (UID: \"6044a7d1-0c67-4d5f-91a7-0c856ad34078\") " Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.504025 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-serving-cert\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.504070 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-client-ca\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.504092 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-config\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.504140 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z4g5\" (UniqueName: 
\"kubernetes.io/projected/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-kube-api-access-8z4g5\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.505149 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-client-ca\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.505309 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-config\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.508347 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6044a7d1-0c67-4d5f-91a7-0c856ad34078-kube-api-access-m65n6" (OuterVolumeSpecName: "kube-api-access-m65n6") pod "6044a7d1-0c67-4d5f-91a7-0c856ad34078" (UID: "6044a7d1-0c67-4d5f-91a7-0c856ad34078"). InnerVolumeSpecName "kube-api-access-m65n6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.510147 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-serving-cert\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.520921 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z4g5\" (UniqueName: \"kubernetes.io/projected/79dc1767-f76c-4bfb-9c0d-76b171cc1cfd-kube-api-access-8z4g5\") pod \"route-controller-manager-7f46bdc698-ln8dg\" (UID: \"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd\") " pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.605205 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m65n6\" (UniqueName: \"kubernetes.io/projected/6044a7d1-0c67-4d5f-91a7-0c856ad34078-kube-api-access-m65n6\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.632530 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dmw62"] Mar 14 05:38:05 crc kubenswrapper[4817]: E0314 05:38:05.632733 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6044a7d1-0c67-4d5f-91a7-0c856ad34078" containerName="oc" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.632743 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6044a7d1-0c67-4d5f-91a7-0c856ad34078" containerName="oc" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.632843 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6044a7d1-0c67-4d5f-91a7-0c856ad34078" containerName="oc" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 
05:38:05.633224 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.647159 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dmw62"] Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.696701 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.807772 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/249427e9-4e77-4fcc-a7df-c293aae130fc-trusted-ca\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.808168 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-bound-sa-token\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.808202 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/249427e9-4e77-4fcc-a7df-c293aae130fc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.808231 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/249427e9-4e77-4fcc-a7df-c293aae130fc-registry-certificates\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.808264 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-registry-tls\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.808345 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzflb\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-kube-api-access-gzflb\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.808373 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/249427e9-4e77-4fcc-a7df-c293aae130fc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.808410 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dmw62\" (UID: 
\"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.836535 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.909813 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/249427e9-4e77-4fcc-a7df-c293aae130fc-trusted-ca\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.909943 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-bound-sa-token\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.909976 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/249427e9-4e77-4fcc-a7df-c293aae130fc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.910011 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/249427e9-4e77-4fcc-a7df-c293aae130fc-registry-certificates\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.910045 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-registry-tls\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.910093 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzflb\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-kube-api-access-gzflb\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.910116 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/249427e9-4e77-4fcc-a7df-c293aae130fc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.911035 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/249427e9-4e77-4fcc-a7df-c293aae130fc-trusted-ca\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.911399 4817 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/249427e9-4e77-4fcc-a7df-c293aae130fc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.912966 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/249427e9-4e77-4fcc-a7df-c293aae130fc-registry-certificates\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.918667 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/249427e9-4e77-4fcc-a7df-c293aae130fc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.918704 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-registry-tls\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.925335 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-bound-sa-token\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.929439 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzflb\" (UniqueName: \"kubernetes.io/projected/249427e9-4e77-4fcc-a7df-c293aae130fc-kube-api-access-gzflb\") pod \"image-registry-66df7c8f76-dmw62\" (UID: \"249427e9-4e77-4fcc-a7df-c293aae130fc\") " pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:05 crc kubenswrapper[4817]: I0314 05:38:05.948256 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:06 crc kubenswrapper[4817]: I0314 05:38:06.067202 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg"] Mar 14 05:38:06 crc kubenswrapper[4817]: W0314 05:38:06.078155 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79dc1767_f76c_4bfb_9c0d_76b171cc1cfd.slice/crio-20169b232ee0ba180ef623199233061697856ea05e5bc84230d8fc35f2839479 WatchSource:0}: Error finding container 20169b232ee0ba180ef623199233061697856ea05e5bc84230d8fc35f2839479: Status 404 returned error can't find the container with id 20169b232ee0ba180ef623199233061697856ea05e5bc84230d8fc35f2839479 Mar 14 05:38:06 crc kubenswrapper[4817]: I0314 05:38:06.191605 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" event={"ID":"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd","Type":"ContainerStarted","Data":"20169b232ee0ba180ef623199233061697856ea05e5bc84230d8fc35f2839479"} Mar 14 05:38:06 crc kubenswrapper[4817]: I0314 05:38:06.193260 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557778-7pqht" event={"ID":"6044a7d1-0c67-4d5f-91a7-0c856ad34078","Type":"ContainerDied","Data":"90f58a691951a658ed39dbeb670c8f89713f745d569bf45546105c7feaca2221"} Mar 14 05:38:06 crc 
kubenswrapper[4817]: I0314 05:38:06.193286 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90f58a691951a658ed39dbeb670c8f89713f745d569bf45546105c7feaca2221" Mar 14 05:38:06 crc kubenswrapper[4817]: I0314 05:38:06.193375 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557778-7pqht" Mar 14 05:38:06 crc kubenswrapper[4817]: I0314 05:38:06.332226 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dmw62"] Mar 14 05:38:06 crc kubenswrapper[4817]: W0314 05:38:06.339114 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod249427e9_4e77_4fcc_a7df_c293aae130fc.slice/crio-7dd4343f91c6399e999416541337478ec8b7b5dbbc5570f24e4292ca2828ef37 WatchSource:0}: Error finding container 7dd4343f91c6399e999416541337478ec8b7b5dbbc5570f24e4292ca2828ef37: Status 404 returned error can't find the container with id 7dd4343f91c6399e999416541337478ec8b7b5dbbc5570f24e4292ca2828ef37 Mar 14 05:38:07 crc kubenswrapper[4817]: I0314 05:38:07.203825 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" event={"ID":"249427e9-4e77-4fcc-a7df-c293aae130fc","Type":"ContainerStarted","Data":"a504d89a5eff19deefe889730d2e321323f8fafa08d8dfc43e480574bfcc5da9"} Mar 14 05:38:07 crc kubenswrapper[4817]: I0314 05:38:07.204278 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" event={"ID":"249427e9-4e77-4fcc-a7df-c293aae130fc","Type":"ContainerStarted","Data":"7dd4343f91c6399e999416541337478ec8b7b5dbbc5570f24e4292ca2828ef37"} Mar 14 05:38:07 crc kubenswrapper[4817]: I0314 05:38:07.204310 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" Mar 14 05:38:07 crc 
kubenswrapper[4817]: I0314 05:38:07.207998 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" event={"ID":"79dc1767-f76c-4bfb-9c0d-76b171cc1cfd","Type":"ContainerStarted","Data":"a590934250e3f5278de6ceb553b6c7b563ee9e1e7c95dc0e960713bd95e894f9"} Mar 14 05:38:07 crc kubenswrapper[4817]: I0314 05:38:07.208237 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:07 crc kubenswrapper[4817]: I0314 05:38:07.213713 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" Mar 14 05:38:07 crc kubenswrapper[4817]: I0314 05:38:07.231123 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dmw62" podStartSLOduration=2.231095229 podStartE2EDuration="2.231095229s" podCreationTimestamp="2026-03-14 05:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:38:07.223837737 +0000 UTC m=+341.262098593" watchObservedRunningTime="2026-03-14 05:38:07.231095229 +0000 UTC m=+341.269356015" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.246231 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f46bdc698-ln8dg" podStartSLOduration=12.246214318 podStartE2EDuration="12.246214318s" podCreationTimestamp="2026-03-14 05:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:38:07.248569537 +0000 UTC m=+341.286830293" watchObservedRunningTime="2026-03-14 05:38:15.246214318 +0000 UTC m=+349.284475064" Mar 14 05:38:15 crc 
kubenswrapper[4817]: I0314 05:38:15.248961 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mxvkz"] Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.249190 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mxvkz" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerName="registry-server" containerID="cri-o://67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d" gracePeriod=30 Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.291000 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbjqr"] Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.291863 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hbjqr" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerName="registry-server" containerID="cri-o://d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a" gracePeriod=30 Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.299443 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5cqlk"] Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.299717 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" podUID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" containerName="marketplace-operator" containerID="cri-o://41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524" gracePeriod=30 Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.315326 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n45mx"] Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.315669 4817 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-n45mx" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerName="registry-server" containerID="cri-o://e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066" gracePeriod=30 Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.323215 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtb86"] Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.324010 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.337264 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nwvm"] Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.337529 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9nwvm" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="registry-server" containerID="cri-o://62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610" gracePeriod=30 Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.342710 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtb86"] Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.477569 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t89n\" (UniqueName: \"kubernetes.io/projected/22e59375-f50e-4050-aeeb-a305ffcb3572-kube-api-access-7t89n\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.477616 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/22e59375-f50e-4050-aeeb-a305ffcb3572-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.477647 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/22e59375-f50e-4050-aeeb-a305ffcb3572-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.578577 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/22e59375-f50e-4050-aeeb-a305ffcb3572-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.578964 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t89n\" (UniqueName: \"kubernetes.io/projected/22e59375-f50e-4050-aeeb-a305ffcb3572-kube-api-access-7t89n\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.578993 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22e59375-f50e-4050-aeeb-a305ffcb3572-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.580009 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22e59375-f50e-4050-aeeb-a305ffcb3572-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.589151 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/22e59375-f50e-4050-aeeb-a305ffcb3572-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.602718 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t89n\" (UniqueName: \"kubernetes.io/projected/22e59375-f50e-4050-aeeb-a305ffcb3572-kube-api-access-7t89n\") pod \"marketplace-operator-79b997595-gtb86\" (UID: \"22e59375-f50e-4050-aeeb-a305ffcb3572\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.685541 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.755463 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.786223 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.793966 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.883420 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-catalog-content\") pod \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.883485 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-utilities\") pod \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.883516 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-utilities\") pod \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.883540 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxjcn\" (UniqueName: \"kubernetes.io/projected/3bf969ab-d18a-43ef-88be-3e1337f14b4d-kube-api-access-rxjcn\") pod \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.883574 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-catalog-content\") pod 
\"3bf969ab-d18a-43ef-88be-3e1337f14b4d\" (UID: \"3bf969ab-d18a-43ef-88be-3e1337f14b4d\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.883589 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jxws\" (UniqueName: \"kubernetes.io/projected/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-kube-api-access-9jxws\") pod \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\" (UID: \"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.884606 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-utilities" (OuterVolumeSpecName: "utilities") pod "7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" (UID: "7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.885170 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-utilities" (OuterVolumeSpecName: "utilities") pod "3bf969ab-d18a-43ef-88be-3e1337f14b4d" (UID: "3bf969ab-d18a-43ef-88be-3e1337f14b4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.889225 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.901947 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf969ab-d18a-43ef-88be-3e1337f14b4d-kube-api-access-rxjcn" (OuterVolumeSpecName: "kube-api-access-rxjcn") pod "3bf969ab-d18a-43ef-88be-3e1337f14b4d" (UID: "3bf969ab-d18a-43ef-88be-3e1337f14b4d"). InnerVolumeSpecName "kube-api-access-rxjcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.905355 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-kube-api-access-9jxws" (OuterVolumeSpecName: "kube-api-access-9jxws") pod "7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" (UID: "7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf"). InnerVolumeSpecName "kube-api-access-9jxws". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.905630 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.959591 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" (UID: "7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.965530 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf969ab-d18a-43ef-88be-3e1337f14b4d" (UID: "3bf969ab-d18a-43ef-88be-3e1337f14b4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984620 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfwll\" (UniqueName: \"kubernetes.io/projected/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-kube-api-access-zfwll\") pod \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984675 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlrgl\" (UniqueName: \"kubernetes.io/projected/c132937c-20aa-47d7-903b-92a9ec65ba6f-kube-api-access-hlrgl\") pod \"c132937c-20aa-47d7-903b-92a9ec65ba6f\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984695 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-utilities\") pod \"c132937c-20aa-47d7-903b-92a9ec65ba6f\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984717 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-operator-metrics\") pod \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984748 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbd22\" (UniqueName: \"kubernetes.io/projected/5624b850-5ca9-47d2-82e9-52bbc3829bc5-kube-api-access-xbd22\") pod \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984771 4817 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-catalog-content\") pod \"c132937c-20aa-47d7-903b-92a9ec65ba6f\" (UID: \"c132937c-20aa-47d7-903b-92a9ec65ba6f\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984791 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-trusted-ca\") pod \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\" (UID: \"5624b850-5ca9-47d2-82e9-52bbc3829bc5\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984807 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-utilities\") pod \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984832 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-catalog-content\") pod \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\" (UID: \"6cad99d4-915e-406a-bca8-2b58fdc7c7ac\") " Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984989 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.984999 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.985008 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.985016 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxjcn\" (UniqueName: \"kubernetes.io/projected/3bf969ab-d18a-43ef-88be-3e1337f14b4d-kube-api-access-rxjcn\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.985025 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf969ab-d18a-43ef-88be-3e1337f14b4d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.985034 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jxws\" (UniqueName: \"kubernetes.io/projected/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf-kube-api-access-9jxws\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.985749 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-utilities" (OuterVolumeSpecName: "utilities") pod "c132937c-20aa-47d7-903b-92a9ec65ba6f" (UID: "c132937c-20aa-47d7-903b-92a9ec65ba6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.985833 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5624b850-5ca9-47d2-82e9-52bbc3829bc5" (UID: "5624b850-5ca9-47d2-82e9-52bbc3829bc5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.986216 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-utilities" (OuterVolumeSpecName: "utilities") pod "6cad99d4-915e-406a-bca8-2b58fdc7c7ac" (UID: "6cad99d4-915e-406a-bca8-2b58fdc7c7ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.988206 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-kube-api-access-zfwll" (OuterVolumeSpecName: "kube-api-access-zfwll") pod "6cad99d4-915e-406a-bca8-2b58fdc7c7ac" (UID: "6cad99d4-915e-406a-bca8-2b58fdc7c7ac"). InnerVolumeSpecName "kube-api-access-zfwll". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.988327 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c132937c-20aa-47d7-903b-92a9ec65ba6f-kube-api-access-hlrgl" (OuterVolumeSpecName: "kube-api-access-hlrgl") pod "c132937c-20aa-47d7-903b-92a9ec65ba6f" (UID: "c132937c-20aa-47d7-903b-92a9ec65ba6f"). InnerVolumeSpecName "kube-api-access-hlrgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.991833 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5624b850-5ca9-47d2-82e9-52bbc3829bc5" (UID: "5624b850-5ca9-47d2-82e9-52bbc3829bc5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:38:15 crc kubenswrapper[4817]: I0314 05:38:15.997653 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5624b850-5ca9-47d2-82e9-52bbc3829bc5-kube-api-access-xbd22" (OuterVolumeSpecName: "kube-api-access-xbd22") pod "5624b850-5ca9-47d2-82e9-52bbc3829bc5" (UID: "5624b850-5ca9-47d2-82e9-52bbc3829bc5"). InnerVolumeSpecName "kube-api-access-xbd22". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.072874 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c132937c-20aa-47d7-903b-92a9ec65ba6f" (UID: "c132937c-20aa-47d7-903b-92a9ec65ba6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.085968 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfwll\" (UniqueName: \"kubernetes.io/projected/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-kube-api-access-zfwll\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.086004 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlrgl\" (UniqueName: \"kubernetes.io/projected/c132937c-20aa-47d7-903b-92a9ec65ba6f-kube-api-access-hlrgl\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.086019 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.086037 4817 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.086051 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbd22\" (UniqueName: \"kubernetes.io/projected/5624b850-5ca9-47d2-82e9-52bbc3829bc5-kube-api-access-xbd22\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.086063 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c132937c-20aa-47d7-903b-92a9ec65ba6f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.086076 4817 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5624b850-5ca9-47d2-82e9-52bbc3829bc5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.086087 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.124370 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cad99d4-915e-406a-bca8-2b58fdc7c7ac" (UID: "6cad99d4-915e-406a-bca8-2b58fdc7c7ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.187183 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cad99d4-915e-406a-bca8-2b58fdc7c7ac-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.316125 4817 generic.go:334] "Generic (PLEG): container finished" podID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerID="67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d" exitCode=0 Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.316184 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvkz" event={"ID":"3bf969ab-d18a-43ef-88be-3e1337f14b4d","Type":"ContainerDied","Data":"67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.316217 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvkz" event={"ID":"3bf969ab-d18a-43ef-88be-3e1337f14b4d","Type":"ContainerDied","Data":"36c5a516f3696346cf024872adf034ec268ab39db114312ddc91c4c9a18c5a79"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.316239 4817 scope.go:117] "RemoveContainer" containerID="67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.316243 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxvkz" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.325538 4817 generic.go:334] "Generic (PLEG): container finished" podID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerID="62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610" exitCode=0 Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.325617 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nwvm" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.325613 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nwvm" event={"ID":"6cad99d4-915e-406a-bca8-2b58fdc7c7ac","Type":"ContainerDied","Data":"62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.328344 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nwvm" event={"ID":"6cad99d4-915e-406a-bca8-2b58fdc7c7ac","Type":"ContainerDied","Data":"6276a2f38e7db28b5ecc46c00f4b8de6d268157c061b384138a8262767e0dd63"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.338331 4817 generic.go:334] "Generic (PLEG): container finished" podID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerID="e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066" exitCode=0 Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.338419 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n45mx" event={"ID":"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf","Type":"ContainerDied","Data":"e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.338454 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n45mx" event={"ID":"7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf","Type":"ContainerDied","Data":"4ba88faca6bc914d6a686758cf8523ad3f2440301219c1ca8ccaec18c52b5feb"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.338463 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n45mx" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.340935 4817 generic.go:334] "Generic (PLEG): container finished" podID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" containerID="41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524" exitCode=0 Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.340997 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" event={"ID":"5624b850-5ca9-47d2-82e9-52bbc3829bc5","Type":"ContainerDied","Data":"41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.341019 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" event={"ID":"5624b850-5ca9-47d2-82e9-52bbc3829bc5","Type":"ContainerDied","Data":"f0d4f9ca813ff045ea943578863d16ff2f953332210851e7c78fd41c2cce5b16"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.340970 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5cqlk" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.349964 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mxvkz"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.359178 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mxvkz"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.359529 4817 generic.go:334] "Generic (PLEG): container finished" podID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerID="d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a" exitCode=0 Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.359602 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbjqr" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.359588 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbjqr" event={"ID":"c132937c-20aa-47d7-903b-92a9ec65ba6f","Type":"ContainerDied","Data":"d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.359663 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbjqr" event={"ID":"c132937c-20aa-47d7-903b-92a9ec65ba6f","Type":"ContainerDied","Data":"2a4736c3188ba57082a0fb1c2ec253b8ea78164f11e869a73fb86b005c99e51d"} Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.363708 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtb86"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.366688 4817 scope.go:117] "RemoveContainer" containerID="e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.377260 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nwvm"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.380307 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9nwvm"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.436425 4817 scope.go:117] "RemoveContainer" containerID="46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.464453 4817 scope.go:117] "RemoveContainer" containerID="67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.464855 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d\": container with ID starting with 67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d not found: ID does not exist" containerID="67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.464907 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d"} err="failed to get container status \"67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d\": rpc error: code = NotFound desc = could not find container \"67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d\": container with ID starting with 67a50d976580ca40a641f98cdafa5faf4159b8d7566708c94cc173240723847d not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.464943 4817 scope.go:117] "RemoveContainer" containerID="e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.465220 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d\": container with ID starting with e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d not found: ID does not exist" containerID="e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.465243 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d"} err="failed to get container status \"e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d\": rpc error: code = NotFound desc = could not find container \"e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d\": container with ID 
starting with e8b860211d3b695f08dd1b63915af4f2c3c86cac68ad26421fa2b09906f81e6d not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.465257 4817 scope.go:117] "RemoveContainer" containerID="46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.465635 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132\": container with ID starting with 46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132 not found: ID does not exist" containerID="46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.465661 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132"} err="failed to get container status \"46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132\": rpc error: code = NotFound desc = could not find container \"46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132\": container with ID starting with 46619572c87d3dce1c220545fc1e13e6e84e19d341e560088e321a7027fdc132 not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.465682 4817 scope.go:117] "RemoveContainer" containerID="62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.494569 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n45mx"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.507089 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n45mx"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.516135 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-5cqlk"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.516157 4817 scope.go:117] "RemoveContainer" containerID="a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.531954 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5cqlk"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.535628 4817 scope.go:117] "RemoveContainer" containerID="85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.536502 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbjqr"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.539157 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hbjqr"] Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.550808 4817 scope.go:117] "RemoveContainer" containerID="62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.551501 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610\": container with ID starting with 62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610 not found: ID does not exist" containerID="62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.551536 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610"} err="failed to get container status \"62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610\": rpc error: code = NotFound desc = could not find container 
\"62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610\": container with ID starting with 62b7ce297fbe7b186a613c45f7014b7f62bd2636a5dc9600975ebe73e2844610 not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.551560 4817 scope.go:117] "RemoveContainer" containerID="a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.552008 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c\": container with ID starting with a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c not found: ID does not exist" containerID="a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.552070 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c"} err="failed to get container status \"a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c\": rpc error: code = NotFound desc = could not find container \"a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c\": container with ID starting with a8949ad2b06f5cf315124907bccf85f86961583fe951a2f368ec52bf1864db1c not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.552104 4817 scope.go:117] "RemoveContainer" containerID="85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.552372 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305\": container with ID starting with 85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305 not found: ID does not exist" 
containerID="85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.552395 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305"} err="failed to get container status \"85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305\": rpc error: code = NotFound desc = could not find container \"85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305\": container with ID starting with 85a710c19f0d02651b9d8c096f1df4e5831b7e24a2c342532c87ad65470db305 not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.552410 4817 scope.go:117] "RemoveContainer" containerID="e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.563051 4817 scope.go:117] "RemoveContainer" containerID="9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.576004 4817 scope.go:117] "RemoveContainer" containerID="c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.586230 4817 scope.go:117] "RemoveContainer" containerID="e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.586513 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066\": container with ID starting with e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066 not found: ID does not exist" containerID="e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.586543 4817 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066"} err="failed to get container status \"e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066\": rpc error: code = NotFound desc = could not find container \"e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066\": container with ID starting with e406bdc4696f7449136d9a2fbd2a0155d24be4314fd627b43f49278d9925c066 not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.586566 4817 scope.go:117] "RemoveContainer" containerID="9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.586745 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434\": container with ID starting with 9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434 not found: ID does not exist" containerID="9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.586764 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434"} err="failed to get container status \"9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434\": rpc error: code = NotFound desc = could not find container \"9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434\": container with ID starting with 9d25e298bec9e24994c862895f7df332ea8e60f3747814abd41c337d47791434 not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.586776 4817 scope.go:117] "RemoveContainer" containerID="c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.586969 4817 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452\": container with ID starting with c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452 not found: ID does not exist" containerID="c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.586993 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452"} err="failed to get container status \"c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452\": rpc error: code = NotFound desc = could not find container \"c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452\": container with ID starting with c123cfce4de32fd45782decb88c7aede5c15b7a2a6243fe6eb7e5e37e213b452 not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.587009 4817 scope.go:117] "RemoveContainer" containerID="41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.597416 4817 scope.go:117] "RemoveContainer" containerID="41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.597661 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524\": container with ID starting with 41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524 not found: ID does not exist" containerID="41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.597687 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524"} 
err="failed to get container status \"41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524\": rpc error: code = NotFound desc = could not find container \"41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524\": container with ID starting with 41e24d0315b493f8bf6fb64b5f7916b3cd0ae43461cfff6a1383e61b38a75524 not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.597702 4817 scope.go:117] "RemoveContainer" containerID="d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.607553 4817 scope.go:117] "RemoveContainer" containerID="e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.621059 4817 scope.go:117] "RemoveContainer" containerID="2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.632466 4817 scope.go:117] "RemoveContainer" containerID="d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.632736 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a\": container with ID starting with d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a not found: ID does not exist" containerID="d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.632785 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a"} err="failed to get container status \"d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a\": rpc error: code = NotFound desc = could not find container 
\"d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a\": container with ID starting with d3f1399a04274abd4b6a94f5d3d0ccbc43c79fe5945311d7cbdbbc504a97d87a not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.632816 4817 scope.go:117] "RemoveContainer" containerID="e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.633098 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf\": container with ID starting with e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf not found: ID does not exist" containerID="e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.633125 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf"} err="failed to get container status \"e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf\": rpc error: code = NotFound desc = could not find container \"e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf\": container with ID starting with e8c4e9fa2eba865ad7f187ed135930986eb06e0850f81e7eb6e0efad541a87bf not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.633142 4817 scope.go:117] "RemoveContainer" containerID="2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1" Mar 14 05:38:16 crc kubenswrapper[4817]: E0314 05:38:16.633326 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1\": container with ID starting with 2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1 not found: ID does not exist" 
containerID="2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.633346 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1"} err="failed to get container status \"2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1\": rpc error: code = NotFound desc = could not find container \"2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1\": container with ID starting with 2bff9833e1c5ebbbe1fa09c92e1b07a01066dd3e8e82a2160e925e2ec2bd99a1 not found: ID does not exist" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.740454 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" path="/var/lib/kubelet/pods/3bf969ab-d18a-43ef-88be-3e1337f14b4d/volumes" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.741287 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" path="/var/lib/kubelet/pods/5624b850-5ca9-47d2-82e9-52bbc3829bc5/volumes" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.741924 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" path="/var/lib/kubelet/pods/6cad99d4-915e-406a-bca8-2b58fdc7c7ac/volumes" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.743259 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" path="/var/lib/kubelet/pods/7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf/volumes" Mar 14 05:38:16 crc kubenswrapper[4817]: I0314 05:38:16.743982 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" path="/var/lib/kubelet/pods/c132937c-20aa-47d7-903b-92a9ec65ba6f/volumes" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.372125 4817 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" event={"ID":"22e59375-f50e-4050-aeeb-a305ffcb3572","Type":"ContainerStarted","Data":"d7bc8dd52a4ab4988a2173dcc819a4b7a9ed86a999a0f2dff0f97e14719fb23b"} Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.372589 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" event={"ID":"22e59375-f50e-4050-aeeb-a305ffcb3572","Type":"ContainerStarted","Data":"4dd65775d2a3b57bfb0bfaea2aec19e0567a0862160132aa3cd66f6c2d52a830"} Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.373356 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.376139 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.396849 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" podStartSLOduration=2.3968199070000002 podStartE2EDuration="2.396819907s" podCreationTimestamp="2026-03-14 05:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:38:17.392194962 +0000 UTC m=+351.430455708" watchObservedRunningTime="2026-03-14 05:38:17.396819907 +0000 UTC m=+351.435080683" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.488927 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bb7k9"] Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489238 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" containerName="marketplace-operator" Mar 14 05:38:17 crc 
kubenswrapper[4817]: I0314 05:38:17.489254 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" containerName="marketplace-operator" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489267 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="extract-content" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489276 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="extract-content" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489289 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerName="extract-content" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489298 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerName="extract-content" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489307 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489314 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489326 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489338 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489349 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerName="extract-content" Mar 14 05:38:17 crc 
kubenswrapper[4817]: I0314 05:38:17.489356 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerName="extract-content" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489365 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="extract-utilities" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489372 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="extract-utilities" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489386 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerName="extract-utilities" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489394 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerName="extract-utilities" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489408 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489414 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489424 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489431 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489444 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerName="extract-utilities" Mar 14 05:38:17 crc 
kubenswrapper[4817]: I0314 05:38:17.489456 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerName="extract-utilities" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489466 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerName="extract-content" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489473 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerName="extract-content" Mar 14 05:38:17 crc kubenswrapper[4817]: E0314 05:38:17.489481 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerName="extract-utilities" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489488 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerName="extract-utilities" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489596 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c132937c-20aa-47d7-903b-92a9ec65ba6f" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489610 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cad99d4-915e-406a-bca8-2b58fdc7c7ac" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489622 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="5624b850-5ca9-47d2-82e9-52bbc3829bc5" containerName="marketplace-operator" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489634 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf969ab-d18a-43ef-88be-3e1337f14b4d" containerName="registry-server" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.489642 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7688f5a4-4e6a-4c85-bed9-26fdd61f4cbf" containerName="registry-server" Mar 14 
05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.490592 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.493984 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.502245 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb7k9"] Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.624948 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28741a2a-08fb-480b-8e68-91df7ddee923-utilities\") pod \"redhat-marketplace-bb7k9\" (UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.625025 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28741a2a-08fb-480b-8e68-91df7ddee923-catalog-content\") pod \"redhat-marketplace-bb7k9\" (UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.625231 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdmtq\" (UniqueName: \"kubernetes.io/projected/28741a2a-08fb-480b-8e68-91df7ddee923-kube-api-access-fdmtq\") pod \"redhat-marketplace-bb7k9\" (UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.670588 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zvghb"] Mar 14 05:38:17 crc 
kubenswrapper[4817]: I0314 05:38:17.671656 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.673299 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.679105 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zvghb"] Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.726369 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdmtq\" (UniqueName: \"kubernetes.io/projected/28741a2a-08fb-480b-8e68-91df7ddee923-kube-api-access-fdmtq\") pod \"redhat-marketplace-bb7k9\" (UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.726457 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28741a2a-08fb-480b-8e68-91df7ddee923-utilities\") pod \"redhat-marketplace-bb7k9\" (UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.726498 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28741a2a-08fb-480b-8e68-91df7ddee923-catalog-content\") pod \"redhat-marketplace-bb7k9\" (UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.727070 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28741a2a-08fb-480b-8e68-91df7ddee923-utilities\") pod \"redhat-marketplace-bb7k9\" 
(UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.727250 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28741a2a-08fb-480b-8e68-91df7ddee923-catalog-content\") pod \"redhat-marketplace-bb7k9\" (UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.746126 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdmtq\" (UniqueName: \"kubernetes.io/projected/28741a2a-08fb-480b-8e68-91df7ddee923-kube-api-access-fdmtq\") pod \"redhat-marketplace-bb7k9\" (UID: \"28741a2a-08fb-480b-8e68-91df7ddee923\") " pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.805661 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bb7k9" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.828002 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv486\" (UniqueName: \"kubernetes.io/projected/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-kube-api-access-hv486\") pod \"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.828143 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-utilities\") pod \"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.828271 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-catalog-content\") pod \"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.928953 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv486\" (UniqueName: \"kubernetes.io/projected/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-kube-api-access-hv486\") pod \"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.929295 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-utilities\") pod 
\"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.929340 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-catalog-content\") pod \"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.929834 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-catalog-content\") pod \"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.930005 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-utilities\") pod \"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.958606 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv486\" (UniqueName: \"kubernetes.io/projected/3b88a2c6-2091-4fe2-b635-16f4c1133ea7-kube-api-access-hv486\") pod \"certified-operators-zvghb\" (UID: \"3b88a2c6-2091-4fe2-b635-16f4c1133ea7\") " pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:17 crc kubenswrapper[4817]: I0314 05:38:17.988548 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zvghb" Mar 14 05:38:18 crc kubenswrapper[4817]: I0314 05:38:18.254915 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bb7k9"] Mar 14 05:38:18 crc kubenswrapper[4817]: W0314 05:38:18.259782 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28741a2a_08fb_480b_8e68_91df7ddee923.slice/crio-ad061276c19b4af63ebab9b419c67dd936442431b3db16a405b9749f78de3805 WatchSource:0}: Error finding container ad061276c19b4af63ebab9b419c67dd936442431b3db16a405b9749f78de3805: Status 404 returned error can't find the container with id ad061276c19b4af63ebab9b419c67dd936442431b3db16a405b9749f78de3805 Mar 14 05:38:18 crc kubenswrapper[4817]: I0314 05:38:18.390481 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb7k9" event={"ID":"28741a2a-08fb-480b-8e68-91df7ddee923","Type":"ContainerStarted","Data":"da1696cfedfa7e977f8a5eaba222bf3583fc08b69734148343b627a8d4e67aa7"} Mar 14 05:38:18 crc kubenswrapper[4817]: I0314 05:38:18.390541 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb7k9" event={"ID":"28741a2a-08fb-480b-8e68-91df7ddee923","Type":"ContainerStarted","Data":"ad061276c19b4af63ebab9b419c67dd936442431b3db16a405b9749f78de3805"} Mar 14 05:38:18 crc kubenswrapper[4817]: I0314 05:38:18.405165 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zvghb"] Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.398218 4817 generic.go:334] "Generic (PLEG): container finished" podID="28741a2a-08fb-480b-8e68-91df7ddee923" containerID="da1696cfedfa7e977f8a5eaba222bf3583fc08b69734148343b627a8d4e67aa7" exitCode=0 Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.398324 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-bb7k9" event={"ID":"28741a2a-08fb-480b-8e68-91df7ddee923","Type":"ContainerDied","Data":"da1696cfedfa7e977f8a5eaba222bf3583fc08b69734148343b627a8d4e67aa7"} Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.401805 4817 generic.go:334] "Generic (PLEG): container finished" podID="3b88a2c6-2091-4fe2-b635-16f4c1133ea7" containerID="c531bd981b160f31f6964640561f89d2a9fcd0586854132d4a542b5d12b4a360" exitCode=0 Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.401962 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvghb" event={"ID":"3b88a2c6-2091-4fe2-b635-16f4c1133ea7","Type":"ContainerDied","Data":"c531bd981b160f31f6964640561f89d2a9fcd0586854132d4a542b5d12b4a360"} Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.402054 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvghb" event={"ID":"3b88a2c6-2091-4fe2-b635-16f4c1133ea7","Type":"ContainerStarted","Data":"a87e55f2b959ee44b61b1f7e0999e3efc1de3dcd898efe1a0a127374256af0b0"} Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.875355 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l8vr9"] Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.876675 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.879305 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.883649 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8vr9"] Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.969842 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57c287ae-7267-4b96-b901-70a4171a6747-utilities\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.969981 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57c287ae-7267-4b96-b901-70a4171a6747-catalog-content\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:19 crc kubenswrapper[4817]: I0314 05:38:19.970017 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccs4\" (UniqueName: \"kubernetes.io/projected/57c287ae-7267-4b96-b901-70a4171a6747-kube-api-access-mccs4\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.068935 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wm5jk"] Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.071712 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/57c287ae-7267-4b96-b901-70a4171a6747-catalog-content\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.071759 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mccs4\" (UniqueName: \"kubernetes.io/projected/57c287ae-7267-4b96-b901-70a4171a6747-kube-api-access-mccs4\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.071839 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57c287ae-7267-4b96-b901-70a4171a6747-utilities\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.071978 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wm5jk" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.072388 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57c287ae-7267-4b96-b901-70a4171a6747-catalog-content\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.072407 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57c287ae-7267-4b96-b901-70a4171a6747-utilities\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.074790 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wm5jk"] Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.076463 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.101035 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mccs4\" (UniqueName: \"kubernetes.io/projected/57c287ae-7267-4b96-b901-70a4171a6747-kube-api-access-mccs4\") pod \"redhat-operators-l8vr9\" (UID: \"57c287ae-7267-4b96-b901-70a4171a6747\") " pod="openshift-marketplace/redhat-operators-l8vr9" Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.173155 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-utilities\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk" 
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.173220 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-catalog-content\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.173246 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5ww\" (UniqueName: \"kubernetes.io/projected/5b9c285c-e272-4976-b90a-cbca8c3c1c28-kube-api-access-9h5ww\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.208538 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8vr9"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.274535 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-utilities\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.274600 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-catalog-content\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.274624 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5ww\" (UniqueName: \"kubernetes.io/projected/5b9c285c-e272-4976-b90a-cbca8c3c1c28-kube-api-access-9h5ww\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.274989 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-utilities\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.275335 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-catalog-content\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.293792 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5ww\" (UniqueName: \"kubernetes.io/projected/5b9c285c-e272-4976-b90a-cbca8c3c1c28-kube-api-access-9h5ww\") pod \"community-operators-wm5jk\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") " pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.390725 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.417926 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb7k9" event={"ID":"28741a2a-08fb-480b-8e68-91df7ddee923","Type":"ContainerStarted","Data":"f2c431e0d59b135e4b05e9b22d4da7e352be32d8a8a7d7baf7ae6bcf4e9d4f02"}
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.421228 4817 generic.go:334] "Generic (PLEG): container finished" podID="3b88a2c6-2091-4fe2-b635-16f4c1133ea7" containerID="09cd8a74807620e07d2d01545185c8c7b2de1956bd4ea237f775794f5495e981" exitCode=0
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.421296 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvghb" event={"ID":"3b88a2c6-2091-4fe2-b635-16f4c1133ea7","Type":"ContainerDied","Data":"09cd8a74807620e07d2d01545185c8c7b2de1956bd4ea237f775794f5495e981"}
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.421617 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8vr9"]
Mar 14 05:38:20 crc kubenswrapper[4817]: W0314 05:38:20.511120 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57c287ae_7267_4b96_b901_70a4171a6747.slice/crio-88319953d48778161830dc07f3cebc5fac7247700f47864df8382a256730596f WatchSource:0}: Error finding container 88319953d48778161830dc07f3cebc5fac7247700f47864df8382a256730596f: Status 404 returned error can't find the container with id 88319953d48778161830dc07f3cebc5fac7247700f47864df8382a256730596f
Mar 14 05:38:20 crc kubenswrapper[4817]: I0314 05:38:20.810043 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wm5jk"]
Mar 14 05:38:20 crc kubenswrapper[4817]: W0314 05:38:20.811315 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b9c285c_e272_4976_b90a_cbca8c3c1c28.slice/crio-6b977ab09b72f1d73d6b62c664f882f1995cb61bb886fcae6f8c20b66e4a25f3 WatchSource:0}: Error finding container 6b977ab09b72f1d73d6b62c664f882f1995cb61bb886fcae6f8c20b66e4a25f3: Status 404 returned error can't find the container with id 6b977ab09b72f1d73d6b62c664f882f1995cb61bb886fcae6f8c20b66e4a25f3
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.427966 4817 generic.go:334] "Generic (PLEG): container finished" podID="28741a2a-08fb-480b-8e68-91df7ddee923" containerID="f2c431e0d59b135e4b05e9b22d4da7e352be32d8a8a7d7baf7ae6bcf4e9d4f02" exitCode=0
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.428051 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb7k9" event={"ID":"28741a2a-08fb-480b-8e68-91df7ddee923","Type":"ContainerDied","Data":"f2c431e0d59b135e4b05e9b22d4da7e352be32d8a8a7d7baf7ae6bcf4e9d4f02"}
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.434821 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zvghb" event={"ID":"3b88a2c6-2091-4fe2-b635-16f4c1133ea7","Type":"ContainerStarted","Data":"fd97edd649a22e7ca18aec637eeb53be5e82e8e6a833d5b845ebbf1ca4908e6a"}
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.437339 4817 generic.go:334] "Generic (PLEG): container finished" podID="57c287ae-7267-4b96-b901-70a4171a6747" containerID="e06529c11482177d47408e0225e4d8c9f4e0f132d4021a30dd4335a27ae93568" exitCode=0
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.437399 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8vr9" event={"ID":"57c287ae-7267-4b96-b901-70a4171a6747","Type":"ContainerDied","Data":"e06529c11482177d47408e0225e4d8c9f4e0f132d4021a30dd4335a27ae93568"}
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.437414 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8vr9" event={"ID":"57c287ae-7267-4b96-b901-70a4171a6747","Type":"ContainerStarted","Data":"88319953d48778161830dc07f3cebc5fac7247700f47864df8382a256730596f"}
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.440133 4817 generic.go:334] "Generic (PLEG): container finished" podID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerID="567d26a9ce8bd511b8fff064b3cc7962e1137324896f3ae196b5ebacefd89892" exitCode=0
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.440171 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm5jk" event={"ID":"5b9c285c-e272-4976-b90a-cbca8c3c1c28","Type":"ContainerDied","Data":"567d26a9ce8bd511b8fff064b3cc7962e1137324896f3ae196b5ebacefd89892"}
Mar 14 05:38:21 crc kubenswrapper[4817]: I0314 05:38:21.440194 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm5jk" event={"ID":"5b9c285c-e272-4976-b90a-cbca8c3c1c28","Type":"ContainerStarted","Data":"6b977ab09b72f1d73d6b62c664f882f1995cb61bb886fcae6f8c20b66e4a25f3"}
Mar 14 05:38:22 crc kubenswrapper[4817]: I0314 05:38:22.449025 4817 generic.go:334] "Generic (PLEG): container finished" podID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerID="d9146b43fc22b393fef21c40017b8713146ae509e0b9bca2ec3ff7c8878058a8" exitCode=0
Mar 14 05:38:22 crc kubenswrapper[4817]: I0314 05:38:22.449225 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm5jk" event={"ID":"5b9c285c-e272-4976-b90a-cbca8c3c1c28","Type":"ContainerDied","Data":"d9146b43fc22b393fef21c40017b8713146ae509e0b9bca2ec3ff7c8878058a8"}
Mar 14 05:38:22 crc kubenswrapper[4817]: I0314 05:38:22.454342 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bb7k9" event={"ID":"28741a2a-08fb-480b-8e68-91df7ddee923","Type":"ContainerStarted","Data":"387bb15031a002bfa3da73a5edbc429d6b3b74f54bf4a34ff6f7b2a504010e85"}
Mar 14 05:38:22 crc kubenswrapper[4817]: I0314 05:38:22.457293 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8vr9" event={"ID":"57c287ae-7267-4b96-b901-70a4171a6747","Type":"ContainerStarted","Data":"0afdfb4e554a9d08fe84de38499da913bf40ac5092c844e88f1f2063c22945d1"}
Mar 14 05:38:22 crc kubenswrapper[4817]: I0314 05:38:22.486373 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zvghb" podStartSLOduration=4.030354838 podStartE2EDuration="5.486351649s" podCreationTimestamp="2026-03-14 05:38:17 +0000 UTC" firstStartedPulling="2026-03-14 05:38:19.402948537 +0000 UTC m=+353.441209283" lastFinishedPulling="2026-03-14 05:38:20.858945338 +0000 UTC m=+354.897206094" observedRunningTime="2026-03-14 05:38:21.502337244 +0000 UTC m=+355.540598020" watchObservedRunningTime="2026-03-14 05:38:22.486351649 +0000 UTC m=+356.524612405"
Mar 14 05:38:22 crc kubenswrapper[4817]: I0314 05:38:22.525483 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bb7k9" podStartSLOduration=3.080172419 podStartE2EDuration="5.525466318s" podCreationTimestamp="2026-03-14 05:38:17 +0000 UTC" firstStartedPulling="2026-03-14 05:38:19.399862697 +0000 UTC m=+353.438123433" lastFinishedPulling="2026-03-14 05:38:21.845156586 +0000 UTC m=+355.883417332" observedRunningTime="2026-03-14 05:38:22.523096969 +0000 UTC m=+356.561357715" watchObservedRunningTime="2026-03-14 05:38:22.525466318 +0000 UTC m=+356.563727054"
Mar 14 05:38:23 crc kubenswrapper[4817]: I0314 05:38:23.464334 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm5jk" event={"ID":"5b9c285c-e272-4976-b90a-cbca8c3c1c28","Type":"ContainerStarted","Data":"8a658da2278dcbf320365cfd4199aa58534c65c5a4827492fa7037b23195b5eb"}
Mar 14 05:38:23 crc kubenswrapper[4817]: I0314 05:38:23.465940 4817 generic.go:334] "Generic (PLEG): container finished" podID="57c287ae-7267-4b96-b901-70a4171a6747" containerID="0afdfb4e554a9d08fe84de38499da913bf40ac5092c844e88f1f2063c22945d1" exitCode=0
Mar 14 05:38:23 crc kubenswrapper[4817]: I0314 05:38:23.466190 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8vr9" event={"ID":"57c287ae-7267-4b96-b901-70a4171a6747","Type":"ContainerDied","Data":"0afdfb4e554a9d08fe84de38499da913bf40ac5092c844e88f1f2063c22945d1"}
Mar 14 05:38:23 crc kubenswrapper[4817]: I0314 05:38:23.487338 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wm5jk" podStartSLOduration=1.9793022439999999 podStartE2EDuration="3.487296468s" podCreationTimestamp="2026-03-14 05:38:20 +0000 UTC" firstStartedPulling="2026-03-14 05:38:21.442757359 +0000 UTC m=+355.481018115" lastFinishedPulling="2026-03-14 05:38:22.950751593 +0000 UTC m=+356.989012339" observedRunningTime="2026-03-14 05:38:23.483146557 +0000 UTC m=+357.521407323" watchObservedRunningTime="2026-03-14 05:38:23.487296468 +0000 UTC m=+357.525557214"
Mar 14 05:38:24 crc kubenswrapper[4817]: I0314 05:38:24.474429 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8vr9" event={"ID":"57c287ae-7267-4b96-b901-70a4171a6747","Type":"ContainerStarted","Data":"58dadc13d3b85518e8bccd2eab3e35d32f319408900aeb8adda96e022f085d52"}
Mar 14 05:38:25 crc kubenswrapper[4817]: I0314 05:38:25.955547 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dmw62"
Mar 14 05:38:25 crc kubenswrapper[4817]: I0314 05:38:25.974873 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l8vr9" podStartSLOduration=4.56462422 podStartE2EDuration="6.974856608s" podCreationTimestamp="2026-03-14 05:38:19 +0000 UTC" firstStartedPulling="2026-03-14 05:38:21.438450164 +0000 UTC m=+355.476710920" lastFinishedPulling="2026-03-14 05:38:23.848682562 +0000 UTC m=+357.886943308" observedRunningTime="2026-03-14 05:38:24.510185156 +0000 UTC m=+358.548445902" watchObservedRunningTime="2026-03-14 05:38:25.974856608 +0000 UTC m=+360.013117344"
Mar 14 05:38:26 crc kubenswrapper[4817]: I0314 05:38:26.011135 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lcsh2"]
Mar 14 05:38:27 crc kubenswrapper[4817]: I0314 05:38:27.806138 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bb7k9"
Mar 14 05:38:27 crc kubenswrapper[4817]: I0314 05:38:27.806217 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bb7k9"
Mar 14 05:38:27 crc kubenswrapper[4817]: I0314 05:38:27.861220 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bb7k9"
Mar 14 05:38:27 crc kubenswrapper[4817]: I0314 05:38:27.988939 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zvghb"
Mar 14 05:38:27 crc kubenswrapper[4817]: I0314 05:38:27.989187 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zvghb"
Mar 14 05:38:28 crc kubenswrapper[4817]: I0314 05:38:28.023493 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zvghb"
Mar 14 05:38:28 crc kubenswrapper[4817]: I0314 05:38:28.548834 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zvghb"
Mar 14 05:38:28 crc kubenswrapper[4817]: I0314 05:38:28.555623 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bb7k9"
Mar 14 05:38:30 crc kubenswrapper[4817]: I0314 05:38:30.209300 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l8vr9"
Mar 14 05:38:30 crc kubenswrapper[4817]: I0314 05:38:30.209715 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l8vr9"
Mar 14 05:38:30 crc kubenswrapper[4817]: I0314 05:38:30.391305 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:30 crc kubenswrapper[4817]: I0314 05:38:30.391364 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:30 crc kubenswrapper[4817]: I0314 05:38:30.426973 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:30 crc kubenswrapper[4817]: I0314 05:38:30.555641 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 05:38:31 crc kubenswrapper[4817]: I0314 05:38:31.258497 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l8vr9" podUID="57c287ae-7267-4b96-b901-70a4171a6747" containerName="registry-server" probeResult="failure" output=<
Mar 14 05:38:31 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s
Mar 14 05:38:31 crc kubenswrapper[4817]: >
Mar 14 05:38:40 crc kubenswrapper[4817]: I0314 05:38:40.254726 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l8vr9"
Mar 14 05:38:40 crc kubenswrapper[4817]: I0314 05:38:40.301630 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l8vr9"
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.048527 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" podUID="e1b60a8f-12a6-4129-9b96-2b69e788111b" containerName="registry" containerID="cri-o://1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4" gracePeriod=30
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.463540 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.610855 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-tls\") pod \"e1b60a8f-12a6-4129-9b96-2b69e788111b\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") "
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.610933 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-certificates\") pod \"e1b60a8f-12a6-4129-9b96-2b69e788111b\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") "
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.610961 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7p56\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-kube-api-access-b7p56\") pod \"e1b60a8f-12a6-4129-9b96-2b69e788111b\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") "
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.610987 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-trusted-ca\") pod \"e1b60a8f-12a6-4129-9b96-2b69e788111b\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") "
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.611294 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e1b60a8f-12a6-4129-9b96-2b69e788111b\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") "
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.611325 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1b60a8f-12a6-4129-9b96-2b69e788111b-ca-trust-extracted\") pod \"e1b60a8f-12a6-4129-9b96-2b69e788111b\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") "
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.611359 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-bound-sa-token\") pod \"e1b60a8f-12a6-4129-9b96-2b69e788111b\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") "
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.611387 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1b60a8f-12a6-4129-9b96-2b69e788111b-installation-pull-secrets\") pod \"e1b60a8f-12a6-4129-9b96-2b69e788111b\" (UID: \"e1b60a8f-12a6-4129-9b96-2b69e788111b\") "
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.613101 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e1b60a8f-12a6-4129-9b96-2b69e788111b" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.613193 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e1b60a8f-12a6-4129-9b96-2b69e788111b" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.624594 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b60a8f-12a6-4129-9b96-2b69e788111b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e1b60a8f-12a6-4129-9b96-2b69e788111b" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.624833 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e1b60a8f-12a6-4129-9b96-2b69e788111b" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.625372 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e1b60a8f-12a6-4129-9b96-2b69e788111b" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.631500 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e1b60a8f-12a6-4129-9b96-2b69e788111b" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.631635 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1b60a8f-12a6-4129-9b96-2b69e788111b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e1b60a8f-12a6-4129-9b96-2b69e788111b" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.631792 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-kube-api-access-b7p56" (OuterVolumeSpecName: "kube-api-access-b7p56") pod "e1b60a8f-12a6-4129-9b96-2b69e788111b" (UID: "e1b60a8f-12a6-4129-9b96-2b69e788111b"). InnerVolumeSpecName "kube-api-access-b7p56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.638501 4817 generic.go:334] "Generic (PLEG): container finished" podID="e1b60a8f-12a6-4129-9b96-2b69e788111b" containerID="1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4" exitCode=0
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.638610 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2"
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.638594 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" event={"ID":"e1b60a8f-12a6-4129-9b96-2b69e788111b","Type":"ContainerDied","Data":"1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4"}
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.638770 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lcsh2" event={"ID":"e1b60a8f-12a6-4129-9b96-2b69e788111b","Type":"ContainerDied","Data":"898d7f0e8195b06c4864c569271f5ad8825e72250a0a0558c29460bb12e079f1"}
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.638814 4817 scope.go:117] "RemoveContainer" containerID="1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4"
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.668360 4817 scope.go:117] "RemoveContainer" containerID="1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4"
Mar 14 05:38:51 crc kubenswrapper[4817]: E0314 05:38:51.669314 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4\": container with ID starting with 1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4 not found: ID does not exist" containerID="1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4"
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.669364 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4"} err="failed to get container status \"1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4\": rpc error: code = NotFound desc = could not find container \"1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4\": container with ID starting with 1432c4432d76a869c8efd1d94187d2e7a3ab84d47d8a05e691d0c4c03a5c84c4 not found: ID does not exist"
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.685161 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lcsh2"]
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.691970 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lcsh2"]
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.713064 4817 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.713110 4817 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1b60a8f-12a6-4129-9b96-2b69e788111b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.713123 4817 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-tls\") on node \"crc\" DevicePath \"\""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.713134 4817 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-registry-certificates\") on node \"crc\" DevicePath \"\""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.713145 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7p56\" (UniqueName: \"kubernetes.io/projected/e1b60a8f-12a6-4129-9b96-2b69e788111b-kube-api-access-b7p56\") on node \"crc\" DevicePath \"\""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.713164 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1b60a8f-12a6-4129-9b96-2b69e788111b-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 14 05:38:51 crc kubenswrapper[4817]: I0314 05:38:51.713173 4817 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1b60a8f-12a6-4129-9b96-2b69e788111b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Mar 14 05:38:52 crc kubenswrapper[4817]: I0314 05:38:52.740291 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b60a8f-12a6-4129-9b96-2b69e788111b" path="/var/lib/kubelet/pods/e1b60a8f-12a6-4129-9b96-2b69e788111b/volumes"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.140271 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557780-9r62k"]
Mar 14 05:40:00 crc kubenswrapper[4817]: E0314 05:40:00.141383 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b60a8f-12a6-4129-9b96-2b69e788111b" containerName="registry"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.141409 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b60a8f-12a6-4129-9b96-2b69e788111b" containerName="registry"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.141597 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b60a8f-12a6-4129-9b96-2b69e788111b" containerName="registry"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.142176 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557780-9r62k"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.195123 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.195667 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.196398 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.198227 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557780-9r62k"]
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.299810 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zkwj\" (UniqueName: \"kubernetes.io/projected/66075180-911b-408b-95ff-2f74c34580be-kube-api-access-5zkwj\") pod \"auto-csr-approver-29557780-9r62k\" (UID: \"66075180-911b-408b-95ff-2f74c34580be\") " pod="openshift-infra/auto-csr-approver-29557780-9r62k"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.401267 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zkwj\" (UniqueName: \"kubernetes.io/projected/66075180-911b-408b-95ff-2f74c34580be-kube-api-access-5zkwj\") pod \"auto-csr-approver-29557780-9r62k\" (UID: \"66075180-911b-408b-95ff-2f74c34580be\") " pod="openshift-infra/auto-csr-approver-29557780-9r62k"
Mar 14 05:40:00 crc kubenswrapper[4817]: I0314 05:40:00.437221 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zkwj\" (UniqueName: \"kubernetes.io/projected/66075180-911b-408b-95ff-2f74c34580be-kube-api-access-5zkwj\") pod \"auto-csr-approver-29557780-9r62k\" (UID: \"66075180-911b-408b-95ff-2f74c34580be\") " pod="openshift-infra/auto-csr-approver-29557780-9r62k"
Mar 14 05:40:01 crc kubenswrapper[4817]: I0314 05:40:01.042258 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557780-9r62k"
Mar 14 05:40:01 crc kubenswrapper[4817]: I0314 05:40:01.228763 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557780-9r62k"]
Mar 14 05:40:01 crc kubenswrapper[4817]: I0314 05:40:01.235749 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 05:40:01 crc kubenswrapper[4817]: I0314 05:40:01.574999 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557780-9r62k" event={"ID":"66075180-911b-408b-95ff-2f74c34580be","Type":"ContainerStarted","Data":"a82c928fa105b2d7df07e3e983b6a85c172b0fac0ad092ba5f37635ae45eaf4e"}
Mar 14 05:40:03 crc kubenswrapper[4817]: I0314 05:40:03.587127 4817 generic.go:334] "Generic (PLEG): container finished" podID="66075180-911b-408b-95ff-2f74c34580be" containerID="4783521c5b9fcad1241d3ba69b6058fd186d7ddcf7cc58e7a7b76e72873daa9e" exitCode=0
Mar 14 05:40:03 crc kubenswrapper[4817]: I0314 05:40:03.587195 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557780-9r62k" event={"ID":"66075180-911b-408b-95ff-2f74c34580be","Type":"ContainerDied","Data":"4783521c5b9fcad1241d3ba69b6058fd186d7ddcf7cc58e7a7b76e72873daa9e"}
Mar 14 05:40:04 crc kubenswrapper[4817]: I0314 05:40:04.818275 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557780-9r62k"
Mar 14 05:40:04 crc kubenswrapper[4817]: I0314 05:40:04.879100 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zkwj\" (UniqueName: \"kubernetes.io/projected/66075180-911b-408b-95ff-2f74c34580be-kube-api-access-5zkwj\") pod \"66075180-911b-408b-95ff-2f74c34580be\" (UID: \"66075180-911b-408b-95ff-2f74c34580be\") "
Mar 14 05:40:04 crc kubenswrapper[4817]: I0314 05:40:04.884457 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66075180-911b-408b-95ff-2f74c34580be-kube-api-access-5zkwj" (OuterVolumeSpecName: "kube-api-access-5zkwj") pod "66075180-911b-408b-95ff-2f74c34580be" (UID: "66075180-911b-408b-95ff-2f74c34580be"). InnerVolumeSpecName "kube-api-access-5zkwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:40:04 crc kubenswrapper[4817]: I0314 05:40:04.980800 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zkwj\" (UniqueName: \"kubernetes.io/projected/66075180-911b-408b-95ff-2f74c34580be-kube-api-access-5zkwj\") on node \"crc\" DevicePath \"\""
Mar 14 05:40:05 crc kubenswrapper[4817]: I0314 05:40:05.603113 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557780-9r62k" event={"ID":"66075180-911b-408b-95ff-2f74c34580be","Type":"ContainerDied","Data":"a82c928fa105b2d7df07e3e983b6a85c172b0fac0ad092ba5f37635ae45eaf4e"}
Mar 14 05:40:05 crc kubenswrapper[4817]: I0314 05:40:05.603160 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a82c928fa105b2d7df07e3e983b6a85c172b0fac0ad092ba5f37635ae45eaf4e"
Mar 14 05:40:05 crc kubenswrapper[4817]: I0314 05:40:05.603209 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557780-9r62k"
Mar 14 05:40:05 crc kubenswrapper[4817]: I0314 05:40:05.877410 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557774-2vhw8"]
Mar 14 05:40:05 crc kubenswrapper[4817]: I0314 05:40:05.882496 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557774-2vhw8"]
Mar 14 05:40:06 crc kubenswrapper[4817]: I0314 05:40:06.770885 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b35f8ad5-461a-4c6c-aba1-56b3358990f8" path="/var/lib/kubelet/pods/b35f8ad5-461a-4c6c-aba1-56b3358990f8/volumes"
Mar 14 05:40:08 crc kubenswrapper[4817]: I0314 05:40:08.565585 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 05:40:08 crc kubenswrapper[4817]: I0314 05:40:08.565680 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 05:40:38 crc kubenswrapper[4817]: I0314 05:40:38.565292 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 05:40:38 crc kubenswrapper[4817]: I0314 05:40:38.565920 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 05:41:08 crc kubenswrapper[4817]: I0314 05:41:08.565294 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 05:41:08 crc kubenswrapper[4817]: I0314 05:41:08.565836 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 05:41:08 crc kubenswrapper[4817]: I0314 05:41:08.565888 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl"
Mar 14 05:41:08 crc kubenswrapper[4817]: I0314 05:41:08.566549 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e635f45e2fa41c4eef67a52269f590c74e82163f2975e7047a02129a72dd1f8"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 14 05:41:08 crc kubenswrapper[4817]: I0314 05:41:08.566604 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://7e635f45e2fa41c4eef67a52269f590c74e82163f2975e7047a02129a72dd1f8" gracePeriod=600
Mar 14
05:41:08 crc kubenswrapper[4817]: I0314 05:41:08.796416 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="7e635f45e2fa41c4eef67a52269f590c74e82163f2975e7047a02129a72dd1f8" exitCode=0 Mar 14 05:41:08 crc kubenswrapper[4817]: I0314 05:41:08.796501 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"7e635f45e2fa41c4eef67a52269f590c74e82163f2975e7047a02129a72dd1f8"} Mar 14 05:41:08 crc kubenswrapper[4817]: I0314 05:41:08.796844 4817 scope.go:117] "RemoveContainer" containerID="602f8b7fe8f67305b6072961c01e7731ff09d2e6a4eb84319ec33e6705c0edda" Mar 14 05:41:09 crc kubenswrapper[4817]: I0314 05:41:09.805751 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"c7f1199d020b47f75e517c784668558faa68dcbb94d53dd88ba20907f7920ea4"} Mar 14 05:41:53 crc kubenswrapper[4817]: I0314 05:41:53.230966 4817 scope.go:117] "RemoveContainer" containerID="75fb71a5bb1018a5ae6aee27017f19c9cf758b921a168f31b464fa1c07dab16c" Mar 14 05:41:53 crc kubenswrapper[4817]: I0314 05:41:53.287086 4817 scope.go:117] "RemoveContainer" containerID="e5bc1099a3efa7d6d7c571da434dcb93e08529e4137d37d4997a0b9290d62ff1" Mar 14 05:41:53 crc kubenswrapper[4817]: I0314 05:41:53.308938 4817 scope.go:117] "RemoveContainer" containerID="f0983187f5a579e2af693a77190b03f9ee933f44dedb3dd03e5e5bbf7d3a8fca" Mar 14 05:41:53 crc kubenswrapper[4817]: I0314 05:41:53.332394 4817 scope.go:117] "RemoveContainer" containerID="775ddda9e523a6b7f667de602956d605ad162e117a6539e1ee8f786f073f3328" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.133833 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557782-fn94w"] Mar 14 
05:42:00 crc kubenswrapper[4817]: E0314 05:42:00.134630 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66075180-911b-408b-95ff-2f74c34580be" containerName="oc" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.134646 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="66075180-911b-408b-95ff-2f74c34580be" containerName="oc" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.134752 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="66075180-911b-408b-95ff-2f74c34580be" containerName="oc" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.135207 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557782-fn94w" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.138821 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.138959 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.138971 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.144363 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557782-fn94w"] Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.304051 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pdm\" (UniqueName: \"kubernetes.io/projected/a8db7b14-eae0-483f-b839-00ac2e6fc47d-kube-api-access-j8pdm\") pod \"auto-csr-approver-29557782-fn94w\" (UID: \"a8db7b14-eae0-483f-b839-00ac2e6fc47d\") " pod="openshift-infra/auto-csr-approver-29557782-fn94w" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.405051 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j8pdm\" (UniqueName: \"kubernetes.io/projected/a8db7b14-eae0-483f-b839-00ac2e6fc47d-kube-api-access-j8pdm\") pod \"auto-csr-approver-29557782-fn94w\" (UID: \"a8db7b14-eae0-483f-b839-00ac2e6fc47d\") " pod="openshift-infra/auto-csr-approver-29557782-fn94w" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.424741 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8pdm\" (UniqueName: \"kubernetes.io/projected/a8db7b14-eae0-483f-b839-00ac2e6fc47d-kube-api-access-j8pdm\") pod \"auto-csr-approver-29557782-fn94w\" (UID: \"a8db7b14-eae0-483f-b839-00ac2e6fc47d\") " pod="openshift-infra/auto-csr-approver-29557782-fn94w" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.458244 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557782-fn94w" Mar 14 05:42:00 crc kubenswrapper[4817]: I0314 05:42:00.655127 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557782-fn94w"] Mar 14 05:42:01 crc kubenswrapper[4817]: I0314 05:42:01.191118 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557782-fn94w" event={"ID":"a8db7b14-eae0-483f-b839-00ac2e6fc47d","Type":"ContainerStarted","Data":"bc5383940250a415ec375b7629cbbb1c420f78ada186991e7922bb76805dcb9b"} Mar 14 05:42:02 crc kubenswrapper[4817]: I0314 05:42:02.198769 4817 generic.go:334] "Generic (PLEG): container finished" podID="a8db7b14-eae0-483f-b839-00ac2e6fc47d" containerID="9067522da97e5affd5a1eb706f5b73f0e8ce8293d7c314df4a432969b939d4ab" exitCode=0 Mar 14 05:42:02 crc kubenswrapper[4817]: I0314 05:42:02.198868 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557782-fn94w" 
event={"ID":"a8db7b14-eae0-483f-b839-00ac2e6fc47d","Type":"ContainerDied","Data":"9067522da97e5affd5a1eb706f5b73f0e8ce8293d7c314df4a432969b939d4ab"} Mar 14 05:42:03 crc kubenswrapper[4817]: I0314 05:42:03.403569 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557782-fn94w" Mar 14 05:42:03 crc kubenswrapper[4817]: I0314 05:42:03.546780 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8pdm\" (UniqueName: \"kubernetes.io/projected/a8db7b14-eae0-483f-b839-00ac2e6fc47d-kube-api-access-j8pdm\") pod \"a8db7b14-eae0-483f-b839-00ac2e6fc47d\" (UID: \"a8db7b14-eae0-483f-b839-00ac2e6fc47d\") " Mar 14 05:42:03 crc kubenswrapper[4817]: I0314 05:42:03.555323 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8db7b14-eae0-483f-b839-00ac2e6fc47d-kube-api-access-j8pdm" (OuterVolumeSpecName: "kube-api-access-j8pdm") pod "a8db7b14-eae0-483f-b839-00ac2e6fc47d" (UID: "a8db7b14-eae0-483f-b839-00ac2e6fc47d"). InnerVolumeSpecName "kube-api-access-j8pdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:42:03 crc kubenswrapper[4817]: I0314 05:42:03.648251 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8pdm\" (UniqueName: \"kubernetes.io/projected/a8db7b14-eae0-483f-b839-00ac2e6fc47d-kube-api-access-j8pdm\") on node \"crc\" DevicePath \"\"" Mar 14 05:42:04 crc kubenswrapper[4817]: I0314 05:42:04.212844 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557782-fn94w" event={"ID":"a8db7b14-eae0-483f-b839-00ac2e6fc47d","Type":"ContainerDied","Data":"bc5383940250a415ec375b7629cbbb1c420f78ada186991e7922bb76805dcb9b"} Mar 14 05:42:04 crc kubenswrapper[4817]: I0314 05:42:04.212924 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc5383940250a415ec375b7629cbbb1c420f78ada186991e7922bb76805dcb9b" Mar 14 05:42:04 crc kubenswrapper[4817]: I0314 05:42:04.212967 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557782-fn94w" Mar 14 05:42:04 crc kubenswrapper[4817]: I0314 05:42:04.453479 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557776-ch6qq"] Mar 14 05:42:04 crc kubenswrapper[4817]: I0314 05:42:04.462358 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557776-ch6qq"] Mar 14 05:42:04 crc kubenswrapper[4817]: I0314 05:42:04.742209 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b53bc251-d4cf-4e2c-b731-84eed88c78af" path="/var/lib/kubelet/pods/b53bc251-d4cf-4e2c-b731-84eed88c78af/volumes" Mar 14 05:42:53 crc kubenswrapper[4817]: I0314 05:42:53.416638 4817 scope.go:117] "RemoveContainer" containerID="77a2c3b96e5aac48d866370f2e4fee59c152932f017c1469a80968926ce9398a" Mar 14 05:43:08 crc kubenswrapper[4817]: I0314 05:43:08.565274 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:43:08 crc kubenswrapper[4817]: I0314 05:43:08.567049 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:43:38 crc kubenswrapper[4817]: I0314 05:43:38.930792 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:43:38 crc kubenswrapper[4817]: I0314 05:43:38.931318 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.139581 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557784-lv7vw"] Mar 14 05:44:00 crc kubenswrapper[4817]: E0314 05:44:00.140465 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8db7b14-eae0-483f-b839-00ac2e6fc47d" containerName="oc" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.140479 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8db7b14-eae0-483f-b839-00ac2e6fc47d" containerName="oc" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.140599 4817 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a8db7b14-eae0-483f-b839-00ac2e6fc47d" containerName="oc" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.141018 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557784-lv7vw" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.142736 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.142953 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.143170 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.144853 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557784-lv7vw"] Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.280273 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bjb\" (UniqueName: \"kubernetes.io/projected/af00ae3a-b371-44a4-80ad-6cf011ca952f-kube-api-access-f7bjb\") pod \"auto-csr-approver-29557784-lv7vw\" (UID: \"af00ae3a-b371-44a4-80ad-6cf011ca952f\") " pod="openshift-infra/auto-csr-approver-29557784-lv7vw" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.382095 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7bjb\" (UniqueName: \"kubernetes.io/projected/af00ae3a-b371-44a4-80ad-6cf011ca952f-kube-api-access-f7bjb\") pod \"auto-csr-approver-29557784-lv7vw\" (UID: \"af00ae3a-b371-44a4-80ad-6cf011ca952f\") " pod="openshift-infra/auto-csr-approver-29557784-lv7vw" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.403374 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f7bjb\" (UniqueName: \"kubernetes.io/projected/af00ae3a-b371-44a4-80ad-6cf011ca952f-kube-api-access-f7bjb\") pod \"auto-csr-approver-29557784-lv7vw\" (UID: \"af00ae3a-b371-44a4-80ad-6cf011ca952f\") " pod="openshift-infra/auto-csr-approver-29557784-lv7vw" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.458455 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557784-lv7vw" Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.648796 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557784-lv7vw"] Mar 14 05:44:00 crc kubenswrapper[4817]: I0314 05:44:00.984327 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557784-lv7vw" event={"ID":"af00ae3a-b371-44a4-80ad-6cf011ca952f","Type":"ContainerStarted","Data":"fa0201f9da558d862846f3a387596d9c46f39937f14c76b0fa1879cc899d02ec"} Mar 14 05:44:03 crc kubenswrapper[4817]: I0314 05:44:03.001748 4817 generic.go:334] "Generic (PLEG): container finished" podID="af00ae3a-b371-44a4-80ad-6cf011ca952f" containerID="26ee2e61994329679d241da2812a3f6a410ea5053c4861dbc7a26145403a6a22" exitCode=0 Mar 14 05:44:03 crc kubenswrapper[4817]: I0314 05:44:03.001839 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557784-lv7vw" event={"ID":"af00ae3a-b371-44a4-80ad-6cf011ca952f","Type":"ContainerDied","Data":"26ee2e61994329679d241da2812a3f6a410ea5053c4861dbc7a26145403a6a22"} Mar 14 05:44:04 crc kubenswrapper[4817]: I0314 05:44:04.229453 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557784-lv7vw" Mar 14 05:44:04 crc kubenswrapper[4817]: I0314 05:44:04.340267 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7bjb\" (UniqueName: \"kubernetes.io/projected/af00ae3a-b371-44a4-80ad-6cf011ca952f-kube-api-access-f7bjb\") pod \"af00ae3a-b371-44a4-80ad-6cf011ca952f\" (UID: \"af00ae3a-b371-44a4-80ad-6cf011ca952f\") " Mar 14 05:44:04 crc kubenswrapper[4817]: I0314 05:44:04.345451 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af00ae3a-b371-44a4-80ad-6cf011ca952f-kube-api-access-f7bjb" (OuterVolumeSpecName: "kube-api-access-f7bjb") pod "af00ae3a-b371-44a4-80ad-6cf011ca952f" (UID: "af00ae3a-b371-44a4-80ad-6cf011ca952f"). InnerVolumeSpecName "kube-api-access-f7bjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:44:04 crc kubenswrapper[4817]: I0314 05:44:04.441191 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7bjb\" (UniqueName: \"kubernetes.io/projected/af00ae3a-b371-44a4-80ad-6cf011ca952f-kube-api-access-f7bjb\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:05 crc kubenswrapper[4817]: I0314 05:44:05.021415 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557784-lv7vw" event={"ID":"af00ae3a-b371-44a4-80ad-6cf011ca952f","Type":"ContainerDied","Data":"fa0201f9da558d862846f3a387596d9c46f39937f14c76b0fa1879cc899d02ec"} Mar 14 05:44:05 crc kubenswrapper[4817]: I0314 05:44:05.021863 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0201f9da558d862846f3a387596d9c46f39937f14c76b0fa1879cc899d02ec" Mar 14 05:44:05 crc kubenswrapper[4817]: I0314 05:44:05.021487 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557784-lv7vw" Mar 14 05:44:05 crc kubenswrapper[4817]: I0314 05:44:05.295391 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557778-7pqht"] Mar 14 05:44:05 crc kubenswrapper[4817]: I0314 05:44:05.298712 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557778-7pqht"] Mar 14 05:44:06 crc kubenswrapper[4817]: I0314 05:44:06.739789 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6044a7d1-0c67-4d5f-91a7-0c856ad34078" path="/var/lib/kubelet/pods/6044a7d1-0c67-4d5f-91a7-0c856ad34078/volumes" Mar 14 05:44:08 crc kubenswrapper[4817]: I0314 05:44:08.566153 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:44:08 crc kubenswrapper[4817]: I0314 05:44:08.566547 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:44:08 crc kubenswrapper[4817]: I0314 05:44:08.566607 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:44:08 crc kubenswrapper[4817]: I0314 05:44:08.567337 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7f1199d020b47f75e517c784668558faa68dcbb94d53dd88ba20907f7920ea4"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 05:44:08 crc kubenswrapper[4817]: I0314 05:44:08.567412 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://c7f1199d020b47f75e517c784668558faa68dcbb94d53dd88ba20907f7920ea4" gracePeriod=600 Mar 14 05:44:09 crc kubenswrapper[4817]: I0314 05:44:09.043886 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="c7f1199d020b47f75e517c784668558faa68dcbb94d53dd88ba20907f7920ea4" exitCode=0 Mar 14 05:44:09 crc kubenswrapper[4817]: I0314 05:44:09.043935 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"c7f1199d020b47f75e517c784668558faa68dcbb94d53dd88ba20907f7920ea4"} Mar 14 05:44:09 crc kubenswrapper[4817]: I0314 05:44:09.043991 4817 scope.go:117] "RemoveContainer" containerID="7e635f45e2fa41c4eef67a52269f590c74e82163f2975e7047a02129a72dd1f8" Mar 14 05:44:10 crc kubenswrapper[4817]: I0314 05:44:10.051497 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"114f8ca4faca8cf630930433049d8d1045dc89bc450b7ac565ae6c778fa29990"} Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.218706 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf"] Mar 14 05:44:15 crc kubenswrapper[4817]: E0314 05:44:15.221752 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af00ae3a-b371-44a4-80ad-6cf011ca952f" containerName="oc" Mar 14 05:44:15 crc 
kubenswrapper[4817]: I0314 05:44:15.221780 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="af00ae3a-b371-44a4-80ad-6cf011ca952f" containerName="oc" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.221966 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="af00ae3a-b371-44a4-80ad-6cf011ca952f" containerName="oc" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.222408 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.223483 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-wlf2j"] Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.224027 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wlf2j" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.226621 4817 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pzz4m" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.226882 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.226998 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.227250 4817 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rrtl2" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.234175 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf"] Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.237905 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wlf2j"] Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 
05:44:15.256537 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-64mg6"] Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.257499 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.259375 4817 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-g6ng4" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.279596 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9zmj\" (UniqueName: \"kubernetes.io/projected/15c7af3c-d545-4e70-b954-29763522ee1f-kube-api-access-p9zmj\") pod \"cert-manager-cainjector-cf98fcc89-xpmdf\" (UID: \"15c7af3c-d545-4e70-b954-29763522ee1f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.279985 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62vhq\" (UniqueName: \"kubernetes.io/projected/5791cc48-c47e-41dd-9679-e38124d37511-kube-api-access-62vhq\") pod \"cert-manager-858654f9db-wlf2j\" (UID: \"5791cc48-c47e-41dd-9679-e38124d37511\") " pod="cert-manager/cert-manager-858654f9db-wlf2j" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.279859 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-64mg6"] Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.381711 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62vhq\" (UniqueName: \"kubernetes.io/projected/5791cc48-c47e-41dd-9679-e38124d37511-kube-api-access-62vhq\") pod \"cert-manager-858654f9db-wlf2j\" (UID: \"5791cc48-c47e-41dd-9679-e38124d37511\") " pod="cert-manager/cert-manager-858654f9db-wlf2j" Mar 14 05:44:15 crc 
kubenswrapper[4817]: I0314 05:44:15.382056 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp28g\" (UniqueName: \"kubernetes.io/projected/c9590aff-9888-44dc-ab0c-47959f244b5e-kube-api-access-pp28g\") pod \"cert-manager-webhook-687f57d79b-64mg6\" (UID: \"c9590aff-9888-44dc-ab0c-47959f244b5e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.382279 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9zmj\" (UniqueName: \"kubernetes.io/projected/15c7af3c-d545-4e70-b954-29763522ee1f-kube-api-access-p9zmj\") pod \"cert-manager-cainjector-cf98fcc89-xpmdf\" (UID: \"15c7af3c-d545-4e70-b954-29763522ee1f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.399668 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62vhq\" (UniqueName: \"kubernetes.io/projected/5791cc48-c47e-41dd-9679-e38124d37511-kube-api-access-62vhq\") pod \"cert-manager-858654f9db-wlf2j\" (UID: \"5791cc48-c47e-41dd-9679-e38124d37511\") " pod="cert-manager/cert-manager-858654f9db-wlf2j" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.400519 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9zmj\" (UniqueName: \"kubernetes.io/projected/15c7af3c-d545-4e70-b954-29763522ee1f-kube-api-access-p9zmj\") pod \"cert-manager-cainjector-cf98fcc89-xpmdf\" (UID: \"15c7af3c-d545-4e70-b954-29763522ee1f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.483752 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp28g\" (UniqueName: \"kubernetes.io/projected/c9590aff-9888-44dc-ab0c-47959f244b5e-kube-api-access-pp28g\") pod \"cert-manager-webhook-687f57d79b-64mg6\" (UID: 
\"c9590aff-9888-44dc-ab0c-47959f244b5e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.501251 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp28g\" (UniqueName: \"kubernetes.io/projected/c9590aff-9888-44dc-ab0c-47959f244b5e-kube-api-access-pp28g\") pod \"cert-manager-webhook-687f57d79b-64mg6\" (UID: \"c9590aff-9888-44dc-ab0c-47959f244b5e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.544189 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.549006 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wlf2j" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.571666 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" Mar 14 05:44:15 crc kubenswrapper[4817]: I0314 05:44:15.760539 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf"] Mar 14 05:44:16 crc kubenswrapper[4817]: I0314 05:44:16.014441 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wlf2j"] Mar 14 05:44:16 crc kubenswrapper[4817]: W0314 05:44:16.019205 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5791cc48_c47e_41dd_9679_e38124d37511.slice/crio-c5184b6bc383f06f1f6c6f725d64ec1b24d6a05ef38fe5e5b91138e88c01ec5b WatchSource:0}: Error finding container c5184b6bc383f06f1f6c6f725d64ec1b24d6a05ef38fe5e5b91138e88c01ec5b: Status 404 returned error can't find the container with id c5184b6bc383f06f1f6c6f725d64ec1b24d6a05ef38fe5e5b91138e88c01ec5b Mar 14 05:44:16 crc kubenswrapper[4817]: I0314 05:44:16.029422 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-64mg6"] Mar 14 05:44:16 crc kubenswrapper[4817]: W0314 05:44:16.034715 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9590aff_9888_44dc_ab0c_47959f244b5e.slice/crio-1c05556e13e773a71937af5e35189baa8675d7a319ad8a8c5939c50ce8171edb WatchSource:0}: Error finding container 1c05556e13e773a71937af5e35189baa8675d7a319ad8a8c5939c50ce8171edb: Status 404 returned error can't find the container with id 1c05556e13e773a71937af5e35189baa8675d7a319ad8a8c5939c50ce8171edb Mar 14 05:44:16 crc kubenswrapper[4817]: I0314 05:44:16.090497 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wlf2j" 
event={"ID":"5791cc48-c47e-41dd-9679-e38124d37511","Type":"ContainerStarted","Data":"c5184b6bc383f06f1f6c6f725d64ec1b24d6a05ef38fe5e5b91138e88c01ec5b"} Mar 14 05:44:16 crc kubenswrapper[4817]: I0314 05:44:16.091920 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf" event={"ID":"15c7af3c-d545-4e70-b954-29763522ee1f","Type":"ContainerStarted","Data":"05c6c152b8680d006fcd1f3128cb6d64e21feb48e63588a19fce91af7af09c41"} Mar 14 05:44:17 crc kubenswrapper[4817]: I0314 05:44:16.092849 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" event={"ID":"c9590aff-9888-44dc-ab0c-47959f244b5e","Type":"ContainerStarted","Data":"1c05556e13e773a71937af5e35189baa8675d7a319ad8a8c5939c50ce8171edb"} Mar 14 05:44:20 crc kubenswrapper[4817]: I0314 05:44:20.418022 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf" event={"ID":"15c7af3c-d545-4e70-b954-29763522ee1f","Type":"ContainerStarted","Data":"cf86aec8e12c4c1d39f587b1ff08bcc35ccbb94a9e7303440e81f66512926d49"} Mar 14 05:44:20 crc kubenswrapper[4817]: I0314 05:44:20.436644 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xpmdf" podStartSLOduration=1.055978827 podStartE2EDuration="5.436627558s" podCreationTimestamp="2026-03-14 05:44:15 +0000 UTC" firstStartedPulling="2026-03-14 05:44:15.774600283 +0000 UTC m=+709.812861029" lastFinishedPulling="2026-03-14 05:44:20.155249014 +0000 UTC m=+714.193509760" observedRunningTime="2026-03-14 05:44:20.434162698 +0000 UTC m=+714.472423444" watchObservedRunningTime="2026-03-14 05:44:20.436627558 +0000 UTC m=+714.474888304" Mar 14 05:44:24 crc kubenswrapper[4817]: I0314 05:44:24.443262 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" 
event={"ID":"c9590aff-9888-44dc-ab0c-47959f244b5e","Type":"ContainerStarted","Data":"d5ca7091f6a351bd9f944198896c1cac5de35e9fd5952adeec1741996c4c9a09"} Mar 14 05:44:24 crc kubenswrapper[4817]: I0314 05:44:24.443676 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" Mar 14 05:44:24 crc kubenswrapper[4817]: I0314 05:44:24.458524 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" podStartSLOduration=1.807228888 podStartE2EDuration="9.458506551s" podCreationTimestamp="2026-03-14 05:44:15 +0000 UTC" firstStartedPulling="2026-03-14 05:44:16.037009295 +0000 UTC m=+710.075270041" lastFinishedPulling="2026-03-14 05:44:23.688286918 +0000 UTC m=+717.726547704" observedRunningTime="2026-03-14 05:44:24.455572897 +0000 UTC m=+718.493833663" watchObservedRunningTime="2026-03-14 05:44:24.458506551 +0000 UTC m=+718.496767307" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.449804 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wlf2j" event={"ID":"5791cc48-c47e-41dd-9679-e38124d37511","Type":"ContainerStarted","Data":"aa67caac6d2f82aa673865d4105830c1ae9f521c3ab2ad407d5694fcae5575a2"} Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.472644 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tntn6"] Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.473146 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovn-controller" containerID="cri-o://b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896" gracePeriod=30 Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.473193 4817 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="northd" containerID="cri-o://bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5" gracePeriod=30 Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.473222 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kube-rbac-proxy-node" containerID="cri-o://290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f" gracePeriod=30 Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.473280 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="nbdb" containerID="cri-o://46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5" gracePeriod=30 Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.473212 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="sbdb" containerID="cri-o://1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297" gracePeriod=30 Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.473317 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0" gracePeriod=30 Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.473372 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovn-acl-logging" 
containerID="cri-o://b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121" gracePeriod=30 Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.534080 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovnkube-controller" containerID="cri-o://23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a" gracePeriod=30 Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.828705 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tntn6_dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa/ovn-acl-logging/0.log" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.829256 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tntn6_dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa/ovn-controller/0.log" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.829717 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.861513 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-wlf2j" podStartSLOduration=2.442686748 podStartE2EDuration="10.861495737s" podCreationTimestamp="2026-03-14 05:44:15 +0000 UTC" firstStartedPulling="2026-03-14 05:44:16.020763632 +0000 UTC m=+710.059024378" lastFinishedPulling="2026-03-14 05:44:24.439572621 +0000 UTC m=+718.477833367" observedRunningTime="2026-03-14 05:44:25.490766935 +0000 UTC m=+719.529027721" watchObservedRunningTime="2026-03-14 05:44:25.861495737 +0000 UTC m=+719.899756483" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885223 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8xjjq"] Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885438 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kubecfg-setup" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885454 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kubecfg-setup" Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885464 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovn-controller" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885471 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovn-controller" Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885477 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kube-rbac-proxy-node" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885483 4817 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kube-rbac-proxy-node" Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885489 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="nbdb" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885494 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="nbdb" Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885503 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kube-rbac-proxy-ovn-metrics" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885508 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kube-rbac-proxy-ovn-metrics" Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885517 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="northd" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885523 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="northd" Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885532 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovnkube-controller" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885537 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovnkube-controller" Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885544 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovn-acl-logging" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885551 4817 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovn-acl-logging" Mar 14 05:44:25 crc kubenswrapper[4817]: E0314 05:44:25.885559 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="sbdb" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885565 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="sbdb" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885647 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="sbdb" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885655 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="nbdb" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885667 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovn-acl-logging" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885673 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kube-rbac-proxy-node" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885680 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="northd" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885690 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovn-controller" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885698 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="kube-rbac-proxy-ovn-metrics" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.885705 4817 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerName="ovnkube-controller" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.887477 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.990958 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-ovn-kubernetes\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991243 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-kubelet\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991275 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-env-overrides\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991043 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991300 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-netns\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991322 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991344 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7fm\" (UniqueName: \"kubernetes.io/projected/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-kube-api-access-xh7fm\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991347 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991366 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-node-log\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991384 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991403 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-ovn\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991425 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-systemd-units\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991455 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-config\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991466 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991483 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovn-node-metrics-cert\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991488 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991517 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-slash\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991506 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991541 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-openvswitch\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991545 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-node-log" (OuterVolumeSpecName: "node-log") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991564 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-bin\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991576 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-slash" (OuterVolumeSpecName: "host-slash") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991591 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-script-lib\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991606 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991619 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-var-lib-openvswitch\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991622 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991650 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-systemd\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991667 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991689 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-log-socket\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991780 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-etc-openvswitch\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991799 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-netd\") pod \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\" (UID: \"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa\") " Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991961 4817 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-log-socket" (OuterVolumeSpecName: "log-socket") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991979 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-cni-bin\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991990 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992021 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.991996 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). 
InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992019 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992060 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992061 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992336 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovnkube-config\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992399 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovn-node-metrics-cert\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992418 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992428 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-env-overrides\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992520 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-slash\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992576 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9698\" (UniqueName: \"kubernetes.io/projected/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-kube-api-access-k9698\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992603 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-node-log\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992630 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-run-ovn-kubernetes\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992677 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-systemd\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992716 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-kubelet\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992759 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-systemd-units\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992839 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovnkube-script-lib\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992871 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-cni-netd\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992940 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-run-netns\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.992964 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-etc-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993017 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-var-lib-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc 
kubenswrapper[4817]: I0314 05:44:25.993050 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-log-socket\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993097 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-ovn\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993165 4817 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-log-socket\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993185 4817 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993197 4817 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993208 4817 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993241 4817 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993253 4817 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993263 4817 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993275 4817 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993284 4817 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-node-log\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993293 4817 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993322 4817 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993330 4817 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993338 4817 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993346 4817 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-slash\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993354 4817 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993363 4817 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.993370 4817 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.996353 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:44:25 crc kubenswrapper[4817]: I0314 05:44:25.996447 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-kube-api-access-xh7fm" (OuterVolumeSpecName: "kube-api-access-xh7fm") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "kube-api-access-xh7fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.003753 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" (UID: "dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.093860 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-systemd\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.093948 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-run-ovn-kubernetes\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.093968 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-kubelet\") pod 
\"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.093988 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-systemd-units\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094005 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovnkube-script-lib\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094025 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-cni-netd\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094042 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-run-netns\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094039 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-run-ovn-kubernetes\") pod \"ovnkube-node-8xjjq\" (UID: 
\"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094039 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-systemd\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094126 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-cni-netd\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094096 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-etc-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094124 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-run-netns\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094060 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-etc-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc 
kubenswrapper[4817]: I0314 05:44:26.094182 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-systemd-units\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094193 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-kubelet\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094223 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-var-lib-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094202 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-var-lib-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094310 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-log-socket\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094330 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-ovn\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094358 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-cni-bin\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094391 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094416 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094433 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovnkube-config\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094520 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovn-node-metrics-cert\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094551 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-env-overrides\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094571 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-slash\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094600 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9698\" (UniqueName: \"kubernetes.io/projected/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-kube-api-access-k9698\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094621 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-node-log\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094671 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7fm\" (UniqueName: 
\"kubernetes.io/projected/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-kube-api-access-xh7fm\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094682 4817 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094694 4817 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094723 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-node-log\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094746 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-log-socket\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094764 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-ovn\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094768 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovnkube-script-lib\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.094783 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-cni-bin\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.095227 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-slash\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.095196 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.095298 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-env-overrides\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.095357 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-run-openvswitch\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.095687 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovnkube-config\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.097673 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-ovn-node-metrics-cert\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.112004 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9698\" (UniqueName: \"kubernetes.io/projected/6eaa38b6-a0e1-4e2c-93f4-fb67054a9347-kube-api-access-k9698\") pod \"ovnkube-node-8xjjq\" (UID: \"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347\") " pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.202555 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:26 crc kubenswrapper[4817]: W0314 05:44:26.232324 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eaa38b6_a0e1_4e2c_93f4_fb67054a9347.slice/crio-17299fea08ef4259492f8765d119c2037362af0d9165f00b62fa5c420676862a WatchSource:0}: Error finding container 17299fea08ef4259492f8765d119c2037362af0d9165f00b62fa5c420676862a: Status 404 returned error can't find the container with id 17299fea08ef4259492f8765d119c2037362af0d9165f00b62fa5c420676862a Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.459687 4817 generic.go:334] "Generic (PLEG): container finished" podID="6eaa38b6-a0e1-4e2c-93f4-fb67054a9347" containerID="86fbff96e4613103ecbf8f952590fa070e59bbe5b767c59bb9816bd1ad32f5b6" exitCode=0 Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.459760 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerDied","Data":"86fbff96e4613103ecbf8f952590fa070e59bbe5b767c59bb9816bd1ad32f5b6"} Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.459817 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"17299fea08ef4259492f8765d119c2037362af0d9165f00b62fa5c420676862a"} Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.462299 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdf7p_217c6f57-e799-4243-86ea-5b76c95c95ec/kube-multus/0.log" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.462332 4817 generic.go:334] "Generic (PLEG): container finished" podID="217c6f57-e799-4243-86ea-5b76c95c95ec" containerID="5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292" exitCode=2 Mar 14 05:44:26 crc 
kubenswrapper[4817]: I0314 05:44:26.462385 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdf7p" event={"ID":"217c6f57-e799-4243-86ea-5b76c95c95ec","Type":"ContainerDied","Data":"5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292"} Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.462800 4817 scope.go:117] "RemoveContainer" containerID="5d73fdcf80b587cc8c53fb4fb9de8033d4e39633be358f192e6970df95a1d292" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.472803 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tntn6_dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa/ovn-acl-logging/0.log" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.473424 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tntn6_dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa/ovn-controller/0.log" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474012 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a" exitCode=0 Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474059 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297" exitCode=0 Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474071 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5" exitCode=0 Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474082 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5" exitCode=0 Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 
05:44:26.474091 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0" exitCode=0
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474100 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f" exitCode=0
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474110 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121" exitCode=143
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474140 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" containerID="b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896" exitCode=143
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474185 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474054 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474251 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474268 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474277 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474287 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474298 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474309 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474318 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474323 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474331 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474340 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474347 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474353 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474359 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474364 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474369 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474375 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474381 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474387 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474395 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474402 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474408 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474413 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474417 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474422 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474427 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474432 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474449 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474454 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474461 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tntn6" event={"ID":"dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa","Type":"ContainerDied","Data":"9596a2376847a524fe549f6ffcfc0cf1fdebf07c44636e372d786b2dcce684f1"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474468 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474474 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474479 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474494 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474500 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474505 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474509 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474514 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474520 4817 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"}
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.474534 4817 scope.go:117] "RemoveContainer" containerID="23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.514500 4817 scope.go:117] "RemoveContainer" containerID="1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.537447 4817 scope.go:117] "RemoveContainer" containerID="46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.555082 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tntn6"]
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.559637 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tntn6"]
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.569525 4817 scope.go:117] "RemoveContainer" containerID="bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.588793 4817 scope.go:117] "RemoveContainer" containerID="d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.632468 4817 scope.go:117] "RemoveContainer" containerID="290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.646147 4817 scope.go:117] "RemoveContainer" containerID="b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.660770 4817 scope.go:117] "RemoveContainer" containerID="b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.677936 4817 scope.go:117] "RemoveContainer" containerID="ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.693166 4817 scope.go:117] "RemoveContainer" containerID="23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.693565 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": container with ID starting with 23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a not found: ID does not exist" containerID="23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.693611 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"} err="failed to get container status \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": rpc error: code = NotFound desc = could not find container \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": container with ID starting with 23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.693639 4817 scope.go:117] "RemoveContainer" containerID="1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.693964 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": container with ID starting with 1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297 not found: ID does not exist" containerID="1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.693994 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"} err="failed to get container status \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": rpc error: code = NotFound desc = could not find container \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": container with ID starting with 1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.694011 4817 scope.go:117] "RemoveContainer" containerID="46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.694350 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": container with ID starting with 46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5 not found: ID does not exist" containerID="46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.694377 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"} err="failed to get container status \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": rpc error: code = NotFound desc = could not find container \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": container with ID starting with 46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.694394 4817 scope.go:117] "RemoveContainer" containerID="bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.694749 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": container with ID starting with bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5 not found: ID does not exist" containerID="bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.694778 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"} err="failed to get container status \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": rpc error: code = NotFound desc = could not find container \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": container with ID starting with bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.694794 4817 scope.go:117] "RemoveContainer" containerID="d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.695255 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": container with ID starting with d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0 not found: ID does not exist" containerID="d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.695279 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"} err="failed to get container status \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": rpc error: code = NotFound desc = could not find container \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": container with ID starting with d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.695297 4817 scope.go:117] "RemoveContainer" containerID="290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.695541 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": container with ID starting with 290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f not found: ID does not exist" containerID="290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.695569 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"} err="failed to get container status \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": rpc error: code = NotFound desc = could not find container \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": container with ID starting with 290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.695586 4817 scope.go:117] "RemoveContainer" containerID="b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.695919 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": container with ID starting with b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121 not found: ID does not exist" containerID="b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.695952 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"} err="failed to get container status \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": rpc error: code = NotFound desc = could not find container \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": container with ID starting with b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.695970 4817 scope.go:117] "RemoveContainer" containerID="b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.696386 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": container with ID starting with b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896 not found: ID does not exist" containerID="b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.696430 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"} err="failed to get container status \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": rpc error: code = NotFound desc = could not find container \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": container with ID starting with b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.696484 4817 scope.go:117] "RemoveContainer" containerID="ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"
Mar 14 05:44:26 crc kubenswrapper[4817]: E0314 05:44:26.696761 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": container with ID starting with ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f not found: ID does not exist" containerID="ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.696784 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"} err="failed to get container status \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": rpc error: code = NotFound desc = could not find container \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": container with ID starting with ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.696797 4817 scope.go:117] "RemoveContainer" containerID="23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.697157 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"} err="failed to get container status \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": rpc error: code = NotFound desc = could not find container \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": container with ID starting with 23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.697183 4817 scope.go:117] "RemoveContainer" containerID="1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.697597 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"} err="failed to get container status \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": rpc error: code = NotFound desc = could not find container \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": container with ID starting with 1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.697620 4817 scope.go:117] "RemoveContainer" containerID="46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.697840 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"} err="failed to get container status \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": rpc error: code = NotFound desc = could not find container \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": container with ID starting with 46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.697860 4817 scope.go:117] "RemoveContainer" containerID="bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.698383 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"} err="failed to get container status \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": rpc error: code = NotFound desc = could not find container \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": container with ID starting with bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.698405 4817 scope.go:117] "RemoveContainer" containerID="d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.698691 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"} err="failed to get container status \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": rpc error: code = NotFound desc = could not find container \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": container with ID starting with d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.698711 4817 scope.go:117] "RemoveContainer" containerID="290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.698938 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"} err="failed to get container status \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": rpc error: code = NotFound desc = could not find container \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": container with ID starting with 290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.698958 4817 scope.go:117] "RemoveContainer" containerID="b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.699330 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"} err="failed to get container status \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": rpc error: code = NotFound desc = could not find container \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": container with ID starting with b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.699352 4817 scope.go:117] "RemoveContainer" containerID="b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.699862 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"} err="failed to get container status \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": rpc error: code = NotFound desc = could not find container \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": container with ID starting with b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.699886 4817 scope.go:117] "RemoveContainer" containerID="ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.700543 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"} err="failed to get container status \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": rpc error: code = NotFound desc = could not find container \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": container with ID starting with ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.700562 4817 scope.go:117] "RemoveContainer" containerID="23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.700824 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"} err="failed to get container status \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": rpc error: code = NotFound desc = could not find container \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": container with ID starting with 23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.700856 4817 scope.go:117] "RemoveContainer" containerID="1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.701269 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"} err="failed to get container status \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": rpc error: code = NotFound desc = could not find container \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": container with ID starting with 1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.701287 4817 scope.go:117] "RemoveContainer" containerID="46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.701870 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"} err="failed to get container status \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": rpc error: code = NotFound desc = could not find container \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": container with ID starting with 46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.701984 4817 scope.go:117] "RemoveContainer" containerID="bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.702481 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"} err="failed to get container status \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": rpc error: code = NotFound desc = could not find container \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": container with ID starting with bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.702548 4817 scope.go:117] "RemoveContainer" containerID="d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.703232 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"} err="failed to get container status \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": rpc error: code = NotFound desc = could not find container \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": container with ID starting with d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.703268 4817 scope.go:117] "RemoveContainer" containerID="290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.703624 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"} err="failed to get container status \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": rpc error: code = NotFound desc = could not find container \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": container with ID starting with 290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.703650 4817 scope.go:117] "RemoveContainer" containerID="b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.704042 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"} err="failed to get container status \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": rpc error: code = NotFound desc = could not find container \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": container with ID starting with b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.704067 4817 scope.go:117] "RemoveContainer" containerID="b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.704335 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"} err="failed to get container status \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": rpc error: code = NotFound desc = could not find container \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": container with ID starting with b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.704361 4817 scope.go:117] "RemoveContainer" containerID="ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.704645 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"} err="failed to get container status \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": rpc error: code = NotFound desc = could not find container \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": container with ID starting with ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.704670 4817 scope.go:117] "RemoveContainer" containerID="23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.705007 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"} err="failed to get container status \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": rpc error: code = NotFound desc = could not find container \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": container with ID starting with 23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.705056 4817 scope.go:117] "RemoveContainer" containerID="1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.705395 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"} err="failed to get container status \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": rpc error: code = NotFound desc = could not find container \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": container with ID starting with 1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297 not found: ID does not exist"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.705423 4817 scope.go:117] "RemoveContainer" containerID="46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"
Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.705670 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"} err="failed to get container status \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": rpc error: code = NotFound desc = could not find container \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": container with ID starting with 46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5 not
exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.705693 4817 scope.go:117] "RemoveContainer" containerID="bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.705993 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"} err="failed to get container status \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": rpc error: code = NotFound desc = could not find container \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": container with ID starting with bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5 not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.706032 4817 scope.go:117] "RemoveContainer" containerID="d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.706297 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"} err="failed to get container status \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": rpc error: code = NotFound desc = could not find container \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": container with ID starting with d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0 not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.706323 4817 scope.go:117] "RemoveContainer" containerID="290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.706571 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"} err="failed to get container status 
\"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": rpc error: code = NotFound desc = could not find container \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": container with ID starting with 290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.706597 4817 scope.go:117] "RemoveContainer" containerID="b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.706839 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121"} err="failed to get container status \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": rpc error: code = NotFound desc = could not find container \"b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121\": container with ID starting with b7fb1b13b1923615d6f19ec781fd1116075fb55c6b2c3a15be76c1604f473121 not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.706863 4817 scope.go:117] "RemoveContainer" containerID="b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.707132 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896"} err="failed to get container status \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": rpc error: code = NotFound desc = could not find container \"b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896\": container with ID starting with b06882d584c52bfc6f8e760385dc29077b3cc9d79908e2065437dfe3ec1cb896 not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.707160 4817 scope.go:117] "RemoveContainer" 
containerID="ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.707450 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f"} err="failed to get container status \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": rpc error: code = NotFound desc = could not find container \"ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f\": container with ID starting with ef973fb0dfbc8433b4ee8ca1b51e8e6167f369433eae99adeec5fcab6ef7386f not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.707470 4817 scope.go:117] "RemoveContainer" containerID="23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.707876 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a"} err="failed to get container status \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": rpc error: code = NotFound desc = could not find container \"23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a\": container with ID starting with 23bc8d42162a39b127cebd1a70f8fa185d3765c0f9bdcff8beb7151e0347069a not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.707970 4817 scope.go:117] "RemoveContainer" containerID="1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.708349 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297"} err="failed to get container status \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": rpc error: code = NotFound desc = could 
not find container \"1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297\": container with ID starting with 1e192ab10bc5bce7883172b9976d95098413a3f91df9892b8073b692066f3297 not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.708380 4817 scope.go:117] "RemoveContainer" containerID="46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.708750 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5"} err="failed to get container status \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": rpc error: code = NotFound desc = could not find container \"46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5\": container with ID starting with 46a17e2c6dc7e998d11752046007a60d7a66f83cd287382991ca0322349984a5 not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.708775 4817 scope.go:117] "RemoveContainer" containerID="bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.709040 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5"} err="failed to get container status \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": rpc error: code = NotFound desc = could not find container \"bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5\": container with ID starting with bc515e60814590c6bb301c949ada3428ee2504e4c49fff82d4f73bf2c15a49f5 not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.709064 4817 scope.go:117] "RemoveContainer" containerID="d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 
05:44:26.709393 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0"} err="failed to get container status \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": rpc error: code = NotFound desc = could not find container \"d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0\": container with ID starting with d41d24831656075845a51174ad45d713950f33bddd68cd5e5f6ae122c1c4bcb0 not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.709414 4817 scope.go:117] "RemoveContainer" containerID="290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.709928 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f"} err="failed to get container status \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": rpc error: code = NotFound desc = could not find container \"290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f\": container with ID starting with 290bf32f15cc5e4cb963ef1ff03d2bc9f09de51486747e0052704dd54b2a465f not found: ID does not exist" Mar 14 05:44:26 crc kubenswrapper[4817]: I0314 05:44:26.741029 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa" path="/var/lib/kubelet/pods/dd2a2f95-23c1-4605-b9b2-178f7ef2a7aa/volumes" Mar 14 05:44:27 crc kubenswrapper[4817]: I0314 05:44:27.483707 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"6fdafa887d857636b71f3d1ba2e4f928ffa6392557280f8e5ecfebdb256305ea"} Mar 14 05:44:27 crc kubenswrapper[4817]: I0314 05:44:27.484153 4817 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"f7f704bb339489b81d01bef67bbdaba467f53564ded1a64bd544c6f2419da8c5"} Mar 14 05:44:27 crc kubenswrapper[4817]: I0314 05:44:27.484170 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"f182a8d09167a8f86ae8aedce8d327466aa99a5538e6cda0120b7b091948e6eb"} Mar 14 05:44:27 crc kubenswrapper[4817]: I0314 05:44:27.486244 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdf7p_217c6f57-e799-4243-86ea-5b76c95c95ec/kube-multus/0.log" Mar 14 05:44:27 crc kubenswrapper[4817]: I0314 05:44:27.486291 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdf7p" event={"ID":"217c6f57-e799-4243-86ea-5b76c95c95ec","Type":"ContainerStarted","Data":"304298c3938b7a07557ba489cb5fae0855f41f71aebe748c0eab9a787458a8d4"} Mar 14 05:44:28 crc kubenswrapper[4817]: I0314 05:44:28.498862 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"b332880b1062f3902acc0ba2f00bfa3ff1f60822e76272c9ca9237d169262e3c"} Mar 14 05:44:28 crc kubenswrapper[4817]: I0314 05:44:28.499244 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"a843c3b9cc5afd04c066f496b1c6c442fec7999644fb4d04d5dac197b84bc2a6"} Mar 14 05:44:28 crc kubenswrapper[4817]: I0314 05:44:28.499259 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" 
event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"1144e66181e73805c9682bf28be8bdc8bbf98a757871c7a39185ca18b1027118"} Mar 14 05:44:30 crc kubenswrapper[4817]: I0314 05:44:30.574984 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-64mg6" Mar 14 05:44:31 crc kubenswrapper[4817]: I0314 05:44:31.522715 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"f5fb31c959314a8d5d34b441ca070fc9c8284f02896e544781bfb3901a1f4c5c"} Mar 14 05:44:32 crc kubenswrapper[4817]: I0314 05:44:32.540098 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" event={"ID":"6eaa38b6-a0e1-4e2c-93f4-fb67054a9347","Type":"ContainerStarted","Data":"8462387adc3d6b96c5dc1200fc5cb255c5bb3871005b9f9f9e14a1fbc6b65491"} Mar 14 05:44:32 crc kubenswrapper[4817]: I0314 05:44:32.540447 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:32 crc kubenswrapper[4817]: I0314 05:44:32.540493 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:32 crc kubenswrapper[4817]: I0314 05:44:32.577489 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" podStartSLOduration=7.577465649 podStartE2EDuration="7.577465649s" podCreationTimestamp="2026-03-14 05:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:44:32.577183481 +0000 UTC m=+726.615444247" watchObservedRunningTime="2026-03-14 05:44:32.577465649 +0000 UTC m=+726.615726425" Mar 14 05:44:32 crc kubenswrapper[4817]: I0314 05:44:32.610928 4817 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:33 crc kubenswrapper[4817]: I0314 05:44:33.545751 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:33 crc kubenswrapper[4817]: I0314 05:44:33.579107 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:44:53 crc kubenswrapper[4817]: I0314 05:44:53.957103 4817 scope.go:117] "RemoveContainer" containerID="82a1b183acf038e01ad735f258effb5e2efac59e3b048f6b5fcfcd879d0f9a26" Mar 14 05:44:56 crc kubenswrapper[4817]: I0314 05:44:56.226755 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8xjjq" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.130654 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m"] Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.131632 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.134534 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.134632 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.147708 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m"] Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.281354 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bf5w\" (UniqueName: \"kubernetes.io/projected/f550d539-a9ab-4994-9f99-132d0bfaca8e-kube-api-access-4bf5w\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.281463 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f550d539-a9ab-4994-9f99-132d0bfaca8e-secret-volume\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.281555 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f550d539-a9ab-4994-9f99-132d0bfaca8e-config-volume\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.383217 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f550d539-a9ab-4994-9f99-132d0bfaca8e-secret-volume\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.383299 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f550d539-a9ab-4994-9f99-132d0bfaca8e-config-volume\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.383324 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bf5w\" (UniqueName: \"kubernetes.io/projected/f550d539-a9ab-4994-9f99-132d0bfaca8e-kube-api-access-4bf5w\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.384190 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f550d539-a9ab-4994-9f99-132d0bfaca8e-config-volume\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.389342 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f550d539-a9ab-4994-9f99-132d0bfaca8e-secret-volume\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.404201 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bf5w\" (UniqueName: \"kubernetes.io/projected/f550d539-a9ab-4994-9f99-132d0bfaca8e-kube-api-access-4bf5w\") pod \"collect-profiles-29557785-ssp4m\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.447682 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.627226 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m"] Mar 14 05:45:00 crc kubenswrapper[4817]: I0314 05:45:00.726538 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" event={"ID":"f550d539-a9ab-4994-9f99-132d0bfaca8e","Type":"ContainerStarted","Data":"c09b2c6eeb75927ce2a1732bf2b0739b995471ef30a4d582d27641cb731ea189"} Mar 14 05:45:01 crc kubenswrapper[4817]: I0314 05:45:01.732623 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" event={"ID":"f550d539-a9ab-4994-9f99-132d0bfaca8e","Type":"ContainerStarted","Data":"d25483dbf9a40a992108387e23619c0ab03f48446dacc9fd4760cd3bae13a867"} Mar 14 05:45:02 crc kubenswrapper[4817]: I0314 05:45:02.740591 4817 generic.go:334] "Generic (PLEG): container finished" podID="f550d539-a9ab-4994-9f99-132d0bfaca8e" 
containerID="d25483dbf9a40a992108387e23619c0ab03f48446dacc9fd4760cd3bae13a867" exitCode=0 Mar 14 05:45:02 crc kubenswrapper[4817]: I0314 05:45:02.744693 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" event={"ID":"f550d539-a9ab-4994-9f99-132d0bfaca8e","Type":"ContainerDied","Data":"d25483dbf9a40a992108387e23619c0ab03f48446dacc9fd4760cd3bae13a867"} Mar 14 05:45:03 crc kubenswrapper[4817]: I0314 05:45:03.970653 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.138611 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f550d539-a9ab-4994-9f99-132d0bfaca8e-config-volume\") pod \"f550d539-a9ab-4994-9f99-132d0bfaca8e\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.138696 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bf5w\" (UniqueName: \"kubernetes.io/projected/f550d539-a9ab-4994-9f99-132d0bfaca8e-kube-api-access-4bf5w\") pod \"f550d539-a9ab-4994-9f99-132d0bfaca8e\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.138731 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f550d539-a9ab-4994-9f99-132d0bfaca8e-secret-volume\") pod \"f550d539-a9ab-4994-9f99-132d0bfaca8e\" (UID: \"f550d539-a9ab-4994-9f99-132d0bfaca8e\") " Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.139585 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f550d539-a9ab-4994-9f99-132d0bfaca8e-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"f550d539-a9ab-4994-9f99-132d0bfaca8e" (UID: "f550d539-a9ab-4994-9f99-132d0bfaca8e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.147440 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f550d539-a9ab-4994-9f99-132d0bfaca8e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f550d539-a9ab-4994-9f99-132d0bfaca8e" (UID: "f550d539-a9ab-4994-9f99-132d0bfaca8e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.147469 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f550d539-a9ab-4994-9f99-132d0bfaca8e-kube-api-access-4bf5w" (OuterVolumeSpecName: "kube-api-access-4bf5w") pod "f550d539-a9ab-4994-9f99-132d0bfaca8e" (UID: "f550d539-a9ab-4994-9f99-132d0bfaca8e"). InnerVolumeSpecName "kube-api-access-4bf5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.242436 4817 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f550d539-a9ab-4994-9f99-132d0bfaca8e-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.242491 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bf5w\" (UniqueName: \"kubernetes.io/projected/f550d539-a9ab-4994-9f99-132d0bfaca8e-kube-api-access-4bf5w\") on node \"crc\" DevicePath \"\"" Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.242518 4817 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f550d539-a9ab-4994-9f99-132d0bfaca8e-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.759531 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" event={"ID":"f550d539-a9ab-4994-9f99-132d0bfaca8e","Type":"ContainerDied","Data":"c09b2c6eeb75927ce2a1732bf2b0739b995471ef30a4d582d27641cb731ea189"} Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.759577 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c09b2c6eeb75927ce2a1732bf2b0739b995471ef30a4d582d27641cb731ea189" Mar 14 05:45:04 crc kubenswrapper[4817]: I0314 05:45:04.759595 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m" Mar 14 05:45:13 crc kubenswrapper[4817]: I0314 05:45:13.239795 4817 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 14 05:45:21 crc kubenswrapper[4817]: I0314 05:45:21.624958 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-l8vr9" podUID="57c287ae-7267-4b96-b901-70a4171a6747" containerName="registry-server" probeResult="failure" output=< Mar 14 05:45:21 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:45:21 crc kubenswrapper[4817]: > Mar 14 05:45:21 crc kubenswrapper[4817]: I0314 05:45:21.625353 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-l8vr9" podUID="57c287ae-7267-4b96-b901-70a4171a6747" containerName="registry-server" probeResult="failure" output=< Mar 14 05:45:21 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:45:21 crc kubenswrapper[4817]: > Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.298870 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk"] Mar 14 05:45:22 crc kubenswrapper[4817]: E0314 05:45:22.299077 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f550d539-a9ab-4994-9f99-132d0bfaca8e" containerName="collect-profiles" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.299089 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f550d539-a9ab-4994-9f99-132d0bfaca8e" containerName="collect-profiles" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.299190 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f550d539-a9ab-4994-9f99-132d0bfaca8e" containerName="collect-profiles" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 
05:45:22.299860 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.301832 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.315959 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk"] Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.394697 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlmh\" (UniqueName: \"kubernetes.io/projected/ee827efa-cd10-4520-928c-42dbfd6ab1ef-kube-api-access-9xlmh\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.394757 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.394802 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.495615 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xlmh\" (UniqueName: \"kubernetes.io/projected/ee827efa-cd10-4520-928c-42dbfd6ab1ef-kube-api-access-9xlmh\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.495667 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.495694 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.496162 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.496270 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.512173 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xlmh\" (UniqueName: \"kubernetes.io/projected/ee827efa-cd10-4520-928c-42dbfd6ab1ef-kube-api-access-9xlmh\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:22 crc kubenswrapper[4817]: I0314 05:45:22.616415 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:23 crc kubenswrapper[4817]: I0314 05:45:23.171309 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk"] Mar 14 05:45:23 crc kubenswrapper[4817]: I0314 05:45:23.950250 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" event={"ID":"ee827efa-cd10-4520-928c-42dbfd6ab1ef","Type":"ContainerStarted","Data":"79d39d215d979e3c84b21727cf107bc97482f2b919cf5dbd06b4b67c1ecd968b"} Mar 14 05:45:23 crc kubenswrapper[4817]: I0314 05:45:23.950558 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" 
event={"ID":"ee827efa-cd10-4520-928c-42dbfd6ab1ef","Type":"ContainerStarted","Data":"5fb7079d3157b3031246a9987a9379c04a54bd42f4fead526ba3b02266ca9efc"} Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.040307 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wklnt"] Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.041558 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.051271 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wklnt"] Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.423710 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-catalog-content\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.423790 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-utilities\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.423811 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjjs\" (UniqueName: \"kubernetes.io/projected/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-kube-api-access-5xjjs\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.524816 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xjjs\" (UniqueName: \"kubernetes.io/projected/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-kube-api-access-5xjjs\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.524984 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-catalog-content\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.525030 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-utilities\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.525554 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-catalog-content\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.525584 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-utilities\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.545427 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5xjjs\" (UniqueName: \"kubernetes.io/projected/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-kube-api-access-5xjjs\") pod \"redhat-operators-wklnt\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") " pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.642457 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.843060 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wklnt"] Mar 14 05:45:24 crc kubenswrapper[4817]: W0314 05:45:24.873839 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d9dc3da_3b17_4bcb_adc8_3e0a2bb9a008.slice/crio-45986c850eb0334aa46611c9acf7cdff4e7c289f1323138929d4246296606ad0 WatchSource:0}: Error finding container 45986c850eb0334aa46611c9acf7cdff4e7c289f1323138929d4246296606ad0: Status 404 returned error can't find the container with id 45986c850eb0334aa46611c9acf7cdff4e7c289f1323138929d4246296606ad0 Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.956532 4817 generic.go:334] "Generic (PLEG): container finished" podID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerID="79d39d215d979e3c84b21727cf107bc97482f2b919cf5dbd06b4b67c1ecd968b" exitCode=0 Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.956608 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" event={"ID":"ee827efa-cd10-4520-928c-42dbfd6ab1ef","Type":"ContainerDied","Data":"79d39d215d979e3c84b21727cf107bc97482f2b919cf5dbd06b4b67c1ecd968b"} Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.957479 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wklnt" 
event={"ID":"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008","Type":"ContainerStarted","Data":"45986c850eb0334aa46611c9acf7cdff4e7c289f1323138929d4246296606ad0"} Mar 14 05:45:24 crc kubenswrapper[4817]: I0314 05:45:24.957749 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 05:45:25 crc kubenswrapper[4817]: I0314 05:45:25.966499 4817 generic.go:334] "Generic (PLEG): container finished" podID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerID="552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad" exitCode=0 Mar 14 05:45:25 crc kubenswrapper[4817]: I0314 05:45:25.966614 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wklnt" event={"ID":"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008","Type":"ContainerDied","Data":"552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad"} Mar 14 05:45:26 crc kubenswrapper[4817]: I0314 05:45:26.975794 4817 generic.go:334] "Generic (PLEG): container finished" podID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerID="ecdcdd3ce9f69f8bf2c0847e3df56b9c8ce9fc5660603c917bde1b4b60b71386" exitCode=0 Mar 14 05:45:26 crc kubenswrapper[4817]: I0314 05:45:26.975981 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" event={"ID":"ee827efa-cd10-4520-928c-42dbfd6ab1ef","Type":"ContainerDied","Data":"ecdcdd3ce9f69f8bf2c0847e3df56b9c8ce9fc5660603c917bde1b4b60b71386"} Mar 14 05:45:26 crc kubenswrapper[4817]: I0314 05:45:26.979699 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wklnt" event={"ID":"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008","Type":"ContainerStarted","Data":"69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e"} Mar 14 05:45:27 crc kubenswrapper[4817]: I0314 05:45:27.985531 4817 generic.go:334] "Generic (PLEG): container finished" podID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" 
containerID="69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e" exitCode=0 Mar 14 05:45:27 crc kubenswrapper[4817]: I0314 05:45:27.985636 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wklnt" event={"ID":"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008","Type":"ContainerDied","Data":"69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e"} Mar 14 05:45:27 crc kubenswrapper[4817]: I0314 05:45:27.988626 4817 generic.go:334] "Generic (PLEG): container finished" podID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerID="7e170a008f763f5dd2ca50d1060ddfd730aaf3dff50595e3a567d7d7652da50f" exitCode=0 Mar 14 05:45:27 crc kubenswrapper[4817]: I0314 05:45:27.988653 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" event={"ID":"ee827efa-cd10-4520-928c-42dbfd6ab1ef","Type":"ContainerDied","Data":"7e170a008f763f5dd2ca50d1060ddfd730aaf3dff50595e3a567d7d7652da50f"} Mar 14 05:45:28 crc kubenswrapper[4817]: I0314 05:45:28.997721 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wklnt" event={"ID":"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008","Type":"ContainerStarted","Data":"1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84"} Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.268684 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.289843 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wklnt" podStartSLOduration=2.883048061 podStartE2EDuration="5.289824949s" podCreationTimestamp="2026-03-14 05:45:24 +0000 UTC" firstStartedPulling="2026-03-14 05:45:25.970942232 +0000 UTC m=+780.009202978" lastFinishedPulling="2026-03-14 05:45:28.3777191 +0000 UTC m=+782.415979866" observedRunningTime="2026-03-14 05:45:29.017139063 +0000 UTC m=+783.055399809" watchObservedRunningTime="2026-03-14 05:45:29.289824949 +0000 UTC m=+783.328085705" Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.381148 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xlmh\" (UniqueName: \"kubernetes.io/projected/ee827efa-cd10-4520-928c-42dbfd6ab1ef-kube-api-access-9xlmh\") pod \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.381211 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-bundle\") pod \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.381247 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-util\") pod \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\" (UID: \"ee827efa-cd10-4520-928c-42dbfd6ab1ef\") " Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.381971 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-bundle" 
(OuterVolumeSpecName: "bundle") pod "ee827efa-cd10-4520-928c-42dbfd6ab1ef" (UID: "ee827efa-cd10-4520-928c-42dbfd6ab1ef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.391589 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-util" (OuterVolumeSpecName: "util") pod "ee827efa-cd10-4520-928c-42dbfd6ab1ef" (UID: "ee827efa-cd10-4520-928c-42dbfd6ab1ef"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.391957 4817 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.391984 4817 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee827efa-cd10-4520-928c-42dbfd6ab1ef-util\") on node \"crc\" DevicePath \"\"" Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.393148 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee827efa-cd10-4520-928c-42dbfd6ab1ef-kube-api-access-9xlmh" (OuterVolumeSpecName: "kube-api-access-9xlmh") pod "ee827efa-cd10-4520-928c-42dbfd6ab1ef" (UID: "ee827efa-cd10-4520-928c-42dbfd6ab1ef"). InnerVolumeSpecName "kube-api-access-9xlmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:45:29 crc kubenswrapper[4817]: I0314 05:45:29.493350 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xlmh\" (UniqueName: \"kubernetes.io/projected/ee827efa-cd10-4520-928c-42dbfd6ab1ef-kube-api-access-9xlmh\") on node \"crc\" DevicePath \"\"" Mar 14 05:45:30 crc kubenswrapper[4817]: I0314 05:45:30.005375 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" Mar 14 05:45:30 crc kubenswrapper[4817]: I0314 05:45:30.006290 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk" event={"ID":"ee827efa-cd10-4520-928c-42dbfd6ab1ef","Type":"ContainerDied","Data":"5fb7079d3157b3031246a9987a9379c04a54bd42f4fead526ba3b02266ca9efc"} Mar 14 05:45:30 crc kubenswrapper[4817]: I0314 05:45:30.006348 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fb7079d3157b3031246a9987a9379c04a54bd42f4fead526ba3b02266ca9efc" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.959234 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-dnl76"] Mar 14 05:45:32 crc kubenswrapper[4817]: E0314 05:45:32.959639 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerName="util" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.959651 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerName="util" Mar 14 05:45:32 crc kubenswrapper[4817]: E0314 05:45:32.959664 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerName="extract" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.959670 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerName="extract" Mar 14 05:45:32 crc kubenswrapper[4817]: E0314 05:45:32.959681 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerName="pull" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.959687 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerName="pull" 
Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.959773 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee827efa-cd10-4520-928c-42dbfd6ab1ef" containerName="extract" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.960145 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-dnl76" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.962264 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rphvf" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.962368 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.962490 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 14 05:45:32 crc kubenswrapper[4817]: I0314 05:45:32.974646 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-dnl76"] Mar 14 05:45:33 crc kubenswrapper[4817]: I0314 05:45:33.043865 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpwmv\" (UniqueName: \"kubernetes.io/projected/1bf623dc-b3e0-45af-9273-bc1367d82ab3-kube-api-access-kpwmv\") pod \"nmstate-operator-796d4cfff4-dnl76\" (UID: \"1bf623dc-b3e0-45af-9273-bc1367d82ab3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-dnl76" Mar 14 05:45:33 crc kubenswrapper[4817]: I0314 05:45:33.145084 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpwmv\" (UniqueName: \"kubernetes.io/projected/1bf623dc-b3e0-45af-9273-bc1367d82ab3-kube-api-access-kpwmv\") pod \"nmstate-operator-796d4cfff4-dnl76\" (UID: \"1bf623dc-b3e0-45af-9273-bc1367d82ab3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-dnl76" Mar 14 05:45:33 crc 
kubenswrapper[4817]: I0314 05:45:33.166948 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpwmv\" (UniqueName: \"kubernetes.io/projected/1bf623dc-b3e0-45af-9273-bc1367d82ab3-kube-api-access-kpwmv\") pod \"nmstate-operator-796d4cfff4-dnl76\" (UID: \"1bf623dc-b3e0-45af-9273-bc1367d82ab3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-dnl76" Mar 14 05:45:33 crc kubenswrapper[4817]: I0314 05:45:33.277073 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-dnl76" Mar 14 05:45:33 crc kubenswrapper[4817]: I0314 05:45:33.480286 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-dnl76"] Mar 14 05:45:34 crc kubenswrapper[4817]: I0314 05:45:34.026665 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-dnl76" event={"ID":"1bf623dc-b3e0-45af-9273-bc1367d82ab3","Type":"ContainerStarted","Data":"a20b739e71da2e91d8f56a9ec4b5a2673c321b43677688c1cfb4bec2fa62964e"} Mar 14 05:45:34 crc kubenswrapper[4817]: I0314 05:45:34.643417 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:34 crc kubenswrapper[4817]: I0314 05:45:34.643474 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wklnt" Mar 14 05:45:35 crc kubenswrapper[4817]: I0314 05:45:35.691268 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wklnt" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="registry-server" probeResult="failure" output=< Mar 14 05:45:35 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:45:35 crc kubenswrapper[4817]: > Mar 14 05:45:40 crc kubenswrapper[4817]: I0314 05:45:40.473442 4817 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-dnl76" event={"ID":"1bf623dc-b3e0-45af-9273-bc1367d82ab3","Type":"ContainerStarted","Data":"56d96be7d5b592d574e1a278d98df74a381d6d9932e38b0904eef3e23b9ec475"} Mar 14 05:45:40 crc kubenswrapper[4817]: I0314 05:45:40.490065 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-dnl76" podStartSLOduration=2.058237406 podStartE2EDuration="8.490047606s" podCreationTimestamp="2026-03-14 05:45:32 +0000 UTC" firstStartedPulling="2026-03-14 05:45:33.571958782 +0000 UTC m=+787.610219528" lastFinishedPulling="2026-03-14 05:45:40.003768972 +0000 UTC m=+794.042029728" observedRunningTime="2026-03-14 05:45:40.487288388 +0000 UTC m=+794.525549134" watchObservedRunningTime="2026-03-14 05:45:40.490047606 +0000 UTC m=+794.528308352" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.344016 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4"] Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.345029 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.346888 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-g6gvk" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.348787 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-d565t"] Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.349603 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.350781 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.359944 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4"] Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.366830 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-d565t"] Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.375220 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bkggf"] Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.375881 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bkggf" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.397077 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87gsr\" (UniqueName: \"kubernetes.io/projected/6b497a2e-9cc7-4484-963f-b2e84eb3681a-kube-api-access-87gsr\") pod \"nmstate-metrics-9b8c8685d-8frq4\" (UID: \"6b497a2e-9cc7-4484-963f-b2e84eb3681a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.397306 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-nmstate-lock\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.397395 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ccadd686-fc95-409d-b7a4-b2e797265e56-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-d565t\" (UID: \"ccadd686-fc95-409d-b7a4-b2e797265e56\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.397489 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46vg9\" (UniqueName: \"kubernetes.io/projected/ccadd686-fc95-409d-b7a4-b2e797265e56-kube-api-access-46vg9\") pod \"nmstate-webhook-5f558f5558-d565t\" (UID: \"ccadd686-fc95-409d-b7a4-b2e797265e56\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.397580 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-dbus-socket\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.397692 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2pv\" (UniqueName: \"kubernetes.io/projected/c99a8dbf-19cc-401a-8569-0add8d2a31bb-kube-api-access-zk2pv\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.397790 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-ovs-socket\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf" Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.464455 4817 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"]
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.465287 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.470965 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.471014 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.471158 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-7gmlj"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.471875 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"]
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.499500 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87gsr\" (UniqueName: \"kubernetes.io/projected/6b497a2e-9cc7-4484-963f-b2e84eb3681a-kube-api-access-87gsr\") pod \"nmstate-metrics-9b8c8685d-8frq4\" (UID: \"6b497a2e-9cc7-4484-963f-b2e84eb3681a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500009 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-nmstate-lock\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500083 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ccadd686-fc95-409d-b7a4-b2e797265e56-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-d565t\" (UID: \"ccadd686-fc95-409d-b7a4-b2e797265e56\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500188 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2adf871a-8f81-480f-9f20-afe8cfeb93f5-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500300 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46vg9\" (UniqueName: \"kubernetes.io/projected/ccadd686-fc95-409d-b7a4-b2e797265e56-kube-api-access-46vg9\") pod \"nmstate-webhook-5f558f5558-d565t\" (UID: \"ccadd686-fc95-409d-b7a4-b2e797265e56\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500118 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-nmstate-lock\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: E0314 05:45:41.500218 4817 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Mar 14 05:45:41 crc kubenswrapper[4817]: E0314 05:45:41.500503 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ccadd686-fc95-409d-b7a4-b2e797265e56-tls-key-pair podName:ccadd686-fc95-409d-b7a4-b2e797265e56 nodeName:}" failed. No retries permitted until 2026-03-14 05:45:42.000484164 +0000 UTC m=+796.038744910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ccadd686-fc95-409d-b7a4-b2e797265e56-tls-key-pair") pod "nmstate-webhook-5f558f5558-d565t" (UID: "ccadd686-fc95-409d-b7a4-b2e797265e56") : secret "openshift-nmstate-webhook" not found
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500418 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h648f\" (UniqueName: \"kubernetes.io/projected/2adf871a-8f81-480f-9f20-afe8cfeb93f5-kube-api-access-h648f\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500594 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-dbus-socket\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500782 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-dbus-socket\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500623 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2adf871a-8f81-480f-9f20-afe8cfeb93f5-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500844 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2pv\" (UniqueName: \"kubernetes.io/projected/c99a8dbf-19cc-401a-8569-0add8d2a31bb-kube-api-access-zk2pv\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.500908 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-ovs-socket\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.501023 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c99a8dbf-19cc-401a-8569-0add8d2a31bb-ovs-socket\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.519538 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87gsr\" (UniqueName: \"kubernetes.io/projected/6b497a2e-9cc7-4484-963f-b2e84eb3681a-kube-api-access-87gsr\") pod \"nmstate-metrics-9b8c8685d-8frq4\" (UID: \"6b497a2e-9cc7-4484-963f-b2e84eb3681a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.532339 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2pv\" (UniqueName: \"kubernetes.io/projected/c99a8dbf-19cc-401a-8569-0add8d2a31bb-kube-api-access-zk2pv\") pod \"nmstate-handler-bkggf\" (UID: \"c99a8dbf-19cc-401a-8569-0add8d2a31bb\") " pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.532355 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46vg9\" (UniqueName: \"kubernetes.io/projected/ccadd686-fc95-409d-b7a4-b2e797265e56-kube-api-access-46vg9\") pod \"nmstate-webhook-5f558f5558-d565t\" (UID: \"ccadd686-fc95-409d-b7a4-b2e797265e56\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.602530 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h648f\" (UniqueName: \"kubernetes.io/projected/2adf871a-8f81-480f-9f20-afe8cfeb93f5-kube-api-access-h648f\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.602575 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2adf871a-8f81-480f-9f20-afe8cfeb93f5-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.602655 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2adf871a-8f81-480f-9f20-afe8cfeb93f5-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.603425 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2adf871a-8f81-480f-9f20-afe8cfeb93f5-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: E0314 05:45:41.603689 4817 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Mar 14 05:45:41 crc kubenswrapper[4817]: E0314 05:45:41.603727 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2adf871a-8f81-480f-9f20-afe8cfeb93f5-plugin-serving-cert podName:2adf871a-8f81-480f-9f20-afe8cfeb93f5 nodeName:}" failed. No retries permitted until 2026-03-14 05:45:42.103715775 +0000 UTC m=+796.141976521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2adf871a-8f81-480f-9f20-afe8cfeb93f5-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-q5lsz" (UID: "2adf871a-8f81-480f-9f20-afe8cfeb93f5") : secret "plugin-serving-cert" not found
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.626243 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h648f\" (UniqueName: \"kubernetes.io/projected/2adf871a-8f81-480f-9f20-afe8cfeb93f5-kube-api-access-h648f\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.662419 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.664874 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76cf5f446b-ztdsp"]
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.665705 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.693098 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.699057 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76cf5f446b-ztdsp"]
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.704039 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-oauth-serving-cert\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.704099 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-trusted-ca-bundle\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.704159 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9z4k\" (UniqueName: \"kubernetes.io/projected/fdb01e36-5837-435a-ad0f-739075089e5f-kube-api-access-z9z4k\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.704185 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdb01e36-5837-435a-ad0f-739075089e5f-console-oauth-config\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.704215 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-service-ca\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.704249 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-console-config\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.704306 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdb01e36-5837-435a-ad0f-739075089e5f-console-serving-cert\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: W0314 05:45:41.732505 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc99a8dbf_19cc_401a_8569_0add8d2a31bb.slice/crio-141f59304bb9cd2606f8ea3a0a8e11d9bfe65b582cd38db13d97159e029273b4 WatchSource:0}: Error finding container 141f59304bb9cd2606f8ea3a0a8e11d9bfe65b582cd38db13d97159e029273b4: Status 404 returned error can't find the container with id 141f59304bb9cd2606f8ea3a0a8e11d9bfe65b582cd38db13d97159e029273b4
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.805752 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-service-ca\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.805799 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-console-config\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.805848 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdb01e36-5837-435a-ad0f-739075089e5f-console-serving-cert\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.805910 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-oauth-serving-cert\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.805934 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-trusted-ca-bundle\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.805965 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9z4k\" (UniqueName: \"kubernetes.io/projected/fdb01e36-5837-435a-ad0f-739075089e5f-kube-api-access-z9z4k\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.805981 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdb01e36-5837-435a-ad0f-739075089e5f-console-oauth-config\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.808613 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-trusted-ca-bundle\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.808661 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-oauth-serving-cert\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.809085 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-service-ca\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.809195 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdb01e36-5837-435a-ad0f-739075089e5f-console-config\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.809473 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdb01e36-5837-435a-ad0f-739075089e5f-console-oauth-config\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.810204 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdb01e36-5837-435a-ad0f-739075089e5f-console-serving-cert\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.824002 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9z4k\" (UniqueName: \"kubernetes.io/projected/fdb01e36-5837-435a-ad0f-739075089e5f-kube-api-access-z9z4k\") pod \"console-76cf5f446b-ztdsp\" (UID: \"fdb01e36-5837-435a-ad0f-739075089e5f\") " pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:41 crc kubenswrapper[4817]: I0314 05:45:41.989468 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.010480 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ccadd686-fc95-409d-b7a4-b2e797265e56-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-d565t\" (UID: \"ccadd686-fc95-409d-b7a4-b2e797265e56\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t"
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.013796 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ccadd686-fc95-409d-b7a4-b2e797265e56-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-d565t\" (UID: \"ccadd686-fc95-409d-b7a4-b2e797265e56\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t"
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.112021 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2adf871a-8f81-480f-9f20-afe8cfeb93f5-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.114945 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2adf871a-8f81-480f-9f20-afe8cfeb93f5-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-q5lsz\" (UID: \"2adf871a-8f81-480f-9f20-afe8cfeb93f5\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.148326 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4"]
Mar 14 05:45:42 crc kubenswrapper[4817]: W0314 05:45:42.157491 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b497a2e_9cc7_4484_963f_b2e84eb3681a.slice/crio-4071b966e32e0802093b0742356b355183826fc1dff52941e4375cfdb870a66a WatchSource:0}: Error finding container 4071b966e32e0802093b0742356b355183826fc1dff52941e4375cfdb870a66a: Status 404 returned error can't find the container with id 4071b966e32e0802093b0742356b355183826fc1dff52941e4375cfdb870a66a
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.173788 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76cf5f446b-ztdsp"]
Mar 14 05:45:42 crc kubenswrapper[4817]: W0314 05:45:42.175718 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdb01e36_5837_435a_ad0f_739075089e5f.slice/crio-53893d819d0cde5d9f2bc82c028e50645ed8f33904e725a17c9e4b9244446c64 WatchSource:0}: Error finding container 53893d819d0cde5d9f2bc82c028e50645ed8f33904e725a17c9e4b9244446c64: Status 404 returned error can't find the container with id 53893d819d0cde5d9f2bc82c028e50645ed8f33904e725a17c9e4b9244446c64
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.274729 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t"
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.379196 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.491034 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4" event={"ID":"6b497a2e-9cc7-4484-963f-b2e84eb3681a","Type":"ContainerStarted","Data":"4071b966e32e0802093b0742356b355183826fc1dff52941e4375cfdb870a66a"}
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.492598 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76cf5f446b-ztdsp" event={"ID":"fdb01e36-5837-435a-ad0f-739075089e5f","Type":"ContainerStarted","Data":"f01b2c180787f3161248ffcb1ac9a555e96a7e59d847cd6a9599151a9b31792a"}
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.492641 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76cf5f446b-ztdsp" event={"ID":"fdb01e36-5837-435a-ad0f-739075089e5f","Type":"ContainerStarted","Data":"53893d819d0cde5d9f2bc82c028e50645ed8f33904e725a17c9e4b9244446c64"}
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.493313 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bkggf" event={"ID":"c99a8dbf-19cc-401a-8569-0add8d2a31bb","Type":"ContainerStarted","Data":"141f59304bb9cd2606f8ea3a0a8e11d9bfe65b582cd38db13d97159e029273b4"}
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.802756 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76cf5f446b-ztdsp" podStartSLOduration=1.802737063 podStartE2EDuration="1.802737063s" podCreationTimestamp="2026-03-14 05:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:45:42.514171405 +0000 UTC m=+796.552432201" watchObservedRunningTime="2026-03-14 05:45:42.802737063 +0000 UTC m=+796.840997819"
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.807668 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz"]
Mar 14 05:45:42 crc kubenswrapper[4817]: W0314 05:45:42.814978 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2adf871a_8f81_480f_9f20_afe8cfeb93f5.slice/crio-4229a3dd927de48bfcef1732aed93ba0f6eb3e7f7a8f6863e02ac8092689bd03 WatchSource:0}: Error finding container 4229a3dd927de48bfcef1732aed93ba0f6eb3e7f7a8f6863e02ac8092689bd03: Status 404 returned error can't find the container with id 4229a3dd927de48bfcef1732aed93ba0f6eb3e7f7a8f6863e02ac8092689bd03
Mar 14 05:45:42 crc kubenswrapper[4817]: I0314 05:45:42.827048 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-d565t"]
Mar 14 05:45:42 crc kubenswrapper[4817]: W0314 05:45:42.831554 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccadd686_fc95_409d_b7a4_b2e797265e56.slice/crio-354b9675aa56f69bd3e2a5bafcf34715756986f4ae629f60d45225588b6189ec WatchSource:0}: Error finding container 354b9675aa56f69bd3e2a5bafcf34715756986f4ae629f60d45225588b6189ec: Status 404 returned error can't find the container with id 354b9675aa56f69bd3e2a5bafcf34715756986f4ae629f60d45225588b6189ec
Mar 14 05:45:43 crc kubenswrapper[4817]: I0314 05:45:43.992781 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz" event={"ID":"2adf871a-8f81-480f-9f20-afe8cfeb93f5","Type":"ContainerStarted","Data":"4229a3dd927de48bfcef1732aed93ba0f6eb3e7f7a8f6863e02ac8092689bd03"}
Mar 14 05:45:43 crc kubenswrapper[4817]: I0314 05:45:43.995073 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t" event={"ID":"ccadd686-fc95-409d-b7a4-b2e797265e56","Type":"ContainerStarted","Data":"354b9675aa56f69bd3e2a5bafcf34715756986f4ae629f60d45225588b6189ec"}
Mar 14 05:45:44 crc kubenswrapper[4817]: I0314 05:45:44.689949 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wklnt"
Mar 14 05:45:44 crc kubenswrapper[4817]: I0314 05:45:44.744318 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wklnt"
Mar 14 05:45:44 crc kubenswrapper[4817]: I0314 05:45:44.919928 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wklnt"]
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.006459 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wklnt" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="registry-server" containerID="cri-o://1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84" gracePeriod=2
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.373305 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wklnt"
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.700360 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-utilities\") pod \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") "
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.700463 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xjjs\" (UniqueName: \"kubernetes.io/projected/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-kube-api-access-5xjjs\") pod \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") "
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.700606 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-catalog-content\") pod \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\" (UID: \"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008\") "
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.703079 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-utilities" (OuterVolumeSpecName: "utilities") pod "1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" (UID: "1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.715106 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-kube-api-access-5xjjs" (OuterVolumeSpecName: "kube-api-access-5xjjs") pod "1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" (UID: "1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008"). InnerVolumeSpecName "kube-api-access-5xjjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.804236 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.804270 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xjjs\" (UniqueName: \"kubernetes.io/projected/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-kube-api-access-5xjjs\") on node \"crc\" DevicePath \"\""
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.850584 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" (UID: "1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:45:46 crc kubenswrapper[4817]: I0314 05:45:46.905131 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.012503 4817 generic.go:334] "Generic (PLEG): container finished" podID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerID="1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84" exitCode=0
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.012540 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wklnt" event={"ID":"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008","Type":"ContainerDied","Data":"1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84"}
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.012565 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wklnt" event={"ID":"1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008","Type":"ContainerDied","Data":"45986c850eb0334aa46611c9acf7cdff4e7c289f1323138929d4246296606ad0"}
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.012581 4817 scope.go:117] "RemoveContainer" containerID="1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.012583 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wklnt"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.027263 4817 scope.go:117] "RemoveContainer" containerID="69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.040018 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wklnt"]
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.042610 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wklnt"]
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.043625 4817 scope.go:117] "RemoveContainer" containerID="552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.079533 4817 scope.go:117] "RemoveContainer" containerID="1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84"
Mar 14 05:45:47 crc kubenswrapper[4817]: E0314 05:45:47.079954 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84\": container with ID starting with 1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84 not found: ID does not exist" containerID="1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.079999 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84"} err="failed to get container status \"1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84\": rpc error: code = NotFound desc = could not find container \"1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84\": container with ID starting with 1bd9e33dde3c965ecb74b315193ec201b709486bce70f45482bb71626aba0a84 not found: ID does not exist"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.080025 4817 scope.go:117] "RemoveContainer" containerID="69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e"
Mar 14 05:45:47 crc kubenswrapper[4817]: E0314 05:45:47.080376 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e\": container with ID starting with 69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e not found: ID does not exist" containerID="69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.080429 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e"} err="failed to get container status \"69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e\": rpc error: code = NotFound desc = could not find container \"69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e\": container with ID starting with 69f67bb9048e294c1b536a1ad5a3b6a9d2cc4c4b73ced47d07b5531d54dfa04e not found: ID does not exist"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.080444 4817 scope.go:117] "RemoveContainer" containerID="552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad"
Mar 14 05:45:47 crc kubenswrapper[4817]: E0314 05:45:47.080689 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad\": container with ID starting with 552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad not found: ID does not exist" containerID="552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad"
Mar 14 05:45:47 crc kubenswrapper[4817]: I0314 05:45:47.080723 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad"} err="failed to get container status \"552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad\": rpc error: code = NotFound desc = could not find container \"552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad\": container with ID starting with 552a91c9115c70771afd589f8300489f299cf5f4b315be03a3798cb1964659ad not found: ID does not exist"
Mar 14 05:45:48 crc kubenswrapper[4817]: I0314 05:45:48.739886 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" path="/var/lib/kubelet/pods/1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008/volumes"
Mar 14 05:45:49 crc kubenswrapper[4817]: I0314 05:45:49.025860 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz" event={"ID":"2adf871a-8f81-480f-9f20-afe8cfeb93f5","Type":"ContainerStarted","Data":"129fb2146534826b18a529b381e3495a1641f5235d2402fa9ad6804fa0f7971e"}
Mar 14 05:45:49 crc kubenswrapper[4817]: I0314 05:45:49.043634 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-q5lsz" podStartSLOduration=2.765469897 podStartE2EDuration="8.043594231s" podCreationTimestamp="2026-03-14 05:45:41 +0000 UTC" firstStartedPulling="2026-03-14 05:45:42.821060316 +0000 UTC m=+796.859321052" lastFinishedPulling="2026-03-14 05:45:48.09918463 +0000 UTC m=+802.137445386" observedRunningTime="2026-03-14 05:45:49.039048601 +0000 UTC m=+803.077309347" watchObservedRunningTime="2026-03-14 05:45:49.043594231 +0000 UTC m=+803.081854977"
Mar 14 05:45:51 crc kubenswrapper[4817]: I0314 05:45:51.989837 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:51 crc kubenswrapper[4817]: I0314 05:45:51.990535 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:51 crc kubenswrapper[4817]: I0314 05:45:51.999404 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:52 crc kubenswrapper[4817]: I0314 05:45:52.047605 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76cf5f446b-ztdsp"
Mar 14 05:45:52 crc kubenswrapper[4817]: I0314 05:45:52.102722 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jnxpm"]
Mar 14 05:45:53 crc kubenswrapper[4817]: I0314 05:45:53.048694 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bkggf" event={"ID":"c99a8dbf-19cc-401a-8569-0add8d2a31bb","Type":"ContainerStarted","Data":"1466d4f51540ee7a3fd0dcd7952c1463f946c3ac12e37873e57ca82f3c5fe396"}
Mar 14 05:45:53 crc kubenswrapper[4817]: I0314 05:45:53.049315 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bkggf"
Mar 14 05:45:53 crc kubenswrapper[4817]: I0314 05:45:53.051090 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t" event={"ID":"ccadd686-fc95-409d-b7a4-b2e797265e56","Type":"ContainerStarted","Data":"7715f90f1769d3f408bda182887256d0baa7d2441ad32d07eae8145e19faba54"}
Mar 14 05:45:53 crc 
kubenswrapper[4817]: I0314 05:45:53.051236 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t" Mar 14 05:45:53 crc kubenswrapper[4817]: I0314 05:45:53.052560 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4" event={"ID":"6b497a2e-9cc7-4484-963f-b2e84eb3681a","Type":"ContainerStarted","Data":"4057ac7c7cc1f609f43f28c97c48da6bbb351b126542956f12ed379d74d8d788"} Mar 14 05:45:53 crc kubenswrapper[4817]: I0314 05:45:53.066707 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bkggf" podStartSLOduration=1.2944336459999999 podStartE2EDuration="12.066685418s" podCreationTimestamp="2026-03-14 05:45:41 +0000 UTC" firstStartedPulling="2026-03-14 05:45:41.735094998 +0000 UTC m=+795.773355744" lastFinishedPulling="2026-03-14 05:45:52.50734676 +0000 UTC m=+806.545607516" observedRunningTime="2026-03-14 05:45:53.062581761 +0000 UTC m=+807.100842517" watchObservedRunningTime="2026-03-14 05:45:53.066685418 +0000 UTC m=+807.104946164" Mar 14 05:45:53 crc kubenswrapper[4817]: I0314 05:45:53.081660 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t" podStartSLOduration=2.406795637 podStartE2EDuration="12.081639235s" podCreationTimestamp="2026-03-14 05:45:41 +0000 UTC" firstStartedPulling="2026-03-14 05:45:42.834364745 +0000 UTC m=+796.872625491" lastFinishedPulling="2026-03-14 05:45:52.509208333 +0000 UTC m=+806.547469089" observedRunningTime="2026-03-14 05:45:53.081201432 +0000 UTC m=+807.119462178" watchObservedRunningTime="2026-03-14 05:45:53.081639235 +0000 UTC m=+807.119899981" Mar 14 05:45:55 crc kubenswrapper[4817]: I0314 05:45:55.068386 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4" 
event={"ID":"6b497a2e-9cc7-4484-963f-b2e84eb3681a","Type":"ContainerStarted","Data":"42a2256e9f2c6eae1d51c0db82088f5f1a6b92ce94406d61d56b4c6371c15055"} Mar 14 05:45:55 crc kubenswrapper[4817]: I0314 05:45:55.093474 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-8frq4" podStartSLOduration=1.494993308 podStartE2EDuration="14.093452071s" podCreationTimestamp="2026-03-14 05:45:41 +0000 UTC" firstStartedPulling="2026-03-14 05:45:42.15980235 +0000 UTC m=+796.198063096" lastFinishedPulling="2026-03-14 05:45:54.758261113 +0000 UTC m=+808.796521859" observedRunningTime="2026-03-14 05:45:55.09269822 +0000 UTC m=+809.130958976" watchObservedRunningTime="2026-03-14 05:45:55.093452071 +0000 UTC m=+809.131712807" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.135980 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557786-hsfx6"] Mar 14 05:46:00 crc kubenswrapper[4817]: E0314 05:46:00.136691 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="extract-utilities" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.136710 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="extract-utilities" Mar 14 05:46:00 crc kubenswrapper[4817]: E0314 05:46:00.136725 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="extract-content" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.136735 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="extract-content" Mar 14 05:46:00 crc kubenswrapper[4817]: E0314 05:46:00.136750 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="registry-server" Mar 14 05:46:00 crc kubenswrapper[4817]: 
I0314 05:46:00.136761 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="registry-server" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.138234 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9dc3da-3b17-4bcb-adc8-3e0a2bb9a008" containerName="registry-server" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.138945 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557786-hsfx6" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.141510 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.141848 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.144549 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557786-hsfx6"] Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.152296 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.296093 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzprb\" (UniqueName: \"kubernetes.io/projected/7a7b6e7d-82c3-47d3-a668-15f2614066dd-kube-api-access-pzprb\") pod \"auto-csr-approver-29557786-hsfx6\" (UID: \"7a7b6e7d-82c3-47d3-a668-15f2614066dd\") " pod="openshift-infra/auto-csr-approver-29557786-hsfx6" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.397822 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzprb\" (UniqueName: \"kubernetes.io/projected/7a7b6e7d-82c3-47d3-a668-15f2614066dd-kube-api-access-pzprb\") pod \"auto-csr-approver-29557786-hsfx6\" 
(UID: \"7a7b6e7d-82c3-47d3-a668-15f2614066dd\") " pod="openshift-infra/auto-csr-approver-29557786-hsfx6" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.419975 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzprb\" (UniqueName: \"kubernetes.io/projected/7a7b6e7d-82c3-47d3-a668-15f2614066dd-kube-api-access-pzprb\") pod \"auto-csr-approver-29557786-hsfx6\" (UID: \"7a7b6e7d-82c3-47d3-a668-15f2614066dd\") " pod="openshift-infra/auto-csr-approver-29557786-hsfx6" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.470345 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557786-hsfx6" Mar 14 05:46:00 crc kubenswrapper[4817]: I0314 05:46:00.647560 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557786-hsfx6"] Mar 14 05:46:01 crc kubenswrapper[4817]: I0314 05:46:01.118853 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557786-hsfx6" event={"ID":"7a7b6e7d-82c3-47d3-a668-15f2614066dd","Type":"ContainerStarted","Data":"976353452e4d7736013c760e3302cc9f64da1831f7d193815ed51931980656c1"} Mar 14 05:46:01 crc kubenswrapper[4817]: I0314 05:46:01.738925 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bkggf" Mar 14 05:46:02 crc kubenswrapper[4817]: I0314 05:46:02.284869 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-d565t" Mar 14 05:46:03 crc kubenswrapper[4817]: I0314 05:46:03.137070 4817 generic.go:334] "Generic (PLEG): container finished" podID="7a7b6e7d-82c3-47d3-a668-15f2614066dd" containerID="41a35aa41d57554c853a735456076d0db386f20a189879575d689ee8ecf206f3" exitCode=0 Mar 14 05:46:03 crc kubenswrapper[4817]: I0314 05:46:03.137257 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29557786-hsfx6" event={"ID":"7a7b6e7d-82c3-47d3-a668-15f2614066dd","Type":"ContainerDied","Data":"41a35aa41d57554c853a735456076d0db386f20a189879575d689ee8ecf206f3"} Mar 14 05:46:04 crc kubenswrapper[4817]: I0314 05:46:04.394825 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557786-hsfx6" Mar 14 05:46:04 crc kubenswrapper[4817]: I0314 05:46:04.555091 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzprb\" (UniqueName: \"kubernetes.io/projected/7a7b6e7d-82c3-47d3-a668-15f2614066dd-kube-api-access-pzprb\") pod \"7a7b6e7d-82c3-47d3-a668-15f2614066dd\" (UID: \"7a7b6e7d-82c3-47d3-a668-15f2614066dd\") " Mar 14 05:46:04 crc kubenswrapper[4817]: I0314 05:46:04.560680 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7b6e7d-82c3-47d3-a668-15f2614066dd-kube-api-access-pzprb" (OuterVolumeSpecName: "kube-api-access-pzprb") pod "7a7b6e7d-82c3-47d3-a668-15f2614066dd" (UID: "7a7b6e7d-82c3-47d3-a668-15f2614066dd"). InnerVolumeSpecName "kube-api-access-pzprb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:46:04 crc kubenswrapper[4817]: I0314 05:46:04.658182 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzprb\" (UniqueName: \"kubernetes.io/projected/7a7b6e7d-82c3-47d3-a668-15f2614066dd-kube-api-access-pzprb\") on node \"crc\" DevicePath \"\"" Mar 14 05:46:05 crc kubenswrapper[4817]: I0314 05:46:05.155327 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557786-hsfx6" event={"ID":"7a7b6e7d-82c3-47d3-a668-15f2614066dd","Type":"ContainerDied","Data":"976353452e4d7736013c760e3302cc9f64da1831f7d193815ed51931980656c1"} Mar 14 05:46:05 crc kubenswrapper[4817]: I0314 05:46:05.155371 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976353452e4d7736013c760e3302cc9f64da1831f7d193815ed51931980656c1" Mar 14 05:46:05 crc kubenswrapper[4817]: I0314 05:46:05.155480 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557786-hsfx6" Mar 14 05:46:05 crc kubenswrapper[4817]: I0314 05:46:05.452112 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557780-9r62k"] Mar 14 05:46:05 crc kubenswrapper[4817]: I0314 05:46:05.461177 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557780-9r62k"] Mar 14 05:46:06 crc kubenswrapper[4817]: I0314 05:46:06.739843 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66075180-911b-408b-95ff-2f74c34580be" path="/var/lib/kubelet/pods/66075180-911b-408b-95ff-2f74c34580be/volumes" Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.833548 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q"] Mar 14 05:46:15 crc kubenswrapper[4817]: E0314 05:46:15.834297 4817 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7a7b6e7d-82c3-47d3-a668-15f2614066dd" containerName="oc" Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.834310 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7b6e7d-82c3-47d3-a668-15f2614066dd" containerName="oc" Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.834417 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a7b6e7d-82c3-47d3-a668-15f2614066dd" containerName="oc" Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.835191 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.837801 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.847163 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q"] Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.933567 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqwbr\" (UniqueName: \"kubernetes.io/projected/8ab5ccb3-d22e-4570-a16b-c553919536d3-kube-api-access-wqwbr\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.933639 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:15 crc kubenswrapper[4817]: I0314 05:46:15.933657 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:16 crc kubenswrapper[4817]: I0314 05:46:16.035217 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqwbr\" (UniqueName: \"kubernetes.io/projected/8ab5ccb3-d22e-4570-a16b-c553919536d3-kube-api-access-wqwbr\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:16 crc kubenswrapper[4817]: I0314 05:46:16.035345 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:16 crc kubenswrapper[4817]: I0314 05:46:16.035383 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:16 crc kubenswrapper[4817]: I0314 
05:46:16.036046 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:16 crc kubenswrapper[4817]: I0314 05:46:16.036164 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:16 crc kubenswrapper[4817]: I0314 05:46:16.074199 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqwbr\" (UniqueName: \"kubernetes.io/projected/8ab5ccb3-d22e-4570-a16b-c553919536d3-kube-api-access-wqwbr\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:16 crc kubenswrapper[4817]: I0314 05:46:16.161639 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" Mar 14 05:46:16 crc kubenswrapper[4817]: I0314 05:46:16.355607 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q"] Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.243131 4817 generic.go:334] "Generic (PLEG): container finished" podID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerID="c5e80fc2450000aa2019f55d3d0b005f923ed2eaaddb8d6de5761d2690913a84" exitCode=0 Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.243211 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" event={"ID":"8ab5ccb3-d22e-4570-a16b-c553919536d3","Type":"ContainerDied","Data":"c5e80fc2450000aa2019f55d3d0b005f923ed2eaaddb8d6de5761d2690913a84"} Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.243496 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" event={"ID":"8ab5ccb3-d22e-4570-a16b-c553919536d3","Type":"ContainerStarted","Data":"8699e580e64797a6a2db90a924cd1703e2c58dfdc1090f973018eefd67d97ba4"} Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.458660 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jnxpm" podUID="ac384b90-5e6b-4477-b71a-8a8a56a29896" containerName="console" containerID="cri-o://8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c" gracePeriod=15 Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.947853 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jnxpm_ac384b90-5e6b-4477-b71a-8a8a56a29896/console/0.log" Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.948374 4817 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-jnxpm" Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.963001 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-config\") pod \"ac384b90-5e6b-4477-b71a-8a8a56a29896\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.963115 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-trusted-ca-bundle\") pod \"ac384b90-5e6b-4477-b71a-8a8a56a29896\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.963176 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-oauth-serving-cert\") pod \"ac384b90-5e6b-4477-b71a-8a8a56a29896\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.963199 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkjql\" (UniqueName: \"kubernetes.io/projected/ac384b90-5e6b-4477-b71a-8a8a56a29896-kube-api-access-lkjql\") pod \"ac384b90-5e6b-4477-b71a-8a8a56a29896\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.963219 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-serving-cert\") pod \"ac384b90-5e6b-4477-b71a-8a8a56a29896\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.963248 4817 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-service-ca\") pod \"ac384b90-5e6b-4477-b71a-8a8a56a29896\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.963301 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-oauth-config\") pod \"ac384b90-5e6b-4477-b71a-8a8a56a29896\" (UID: \"ac384b90-5e6b-4477-b71a-8a8a56a29896\") " Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.963908 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-config" (OuterVolumeSpecName: "console-config") pod "ac384b90-5e6b-4477-b71a-8a8a56a29896" (UID: "ac384b90-5e6b-4477-b71a-8a8a56a29896"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.964229 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ac384b90-5e6b-4477-b71a-8a8a56a29896" (UID: "ac384b90-5e6b-4477-b71a-8a8a56a29896"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.966324 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ac384b90-5e6b-4477-b71a-8a8a56a29896" (UID: "ac384b90-5e6b-4477-b71a-8a8a56a29896"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.966744 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-service-ca" (OuterVolumeSpecName: "service-ca") pod "ac384b90-5e6b-4477-b71a-8a8a56a29896" (UID: "ac384b90-5e6b-4477-b71a-8a8a56a29896"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.980132 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac384b90-5e6b-4477-b71a-8a8a56a29896-kube-api-access-lkjql" (OuterVolumeSpecName: "kube-api-access-lkjql") pod "ac384b90-5e6b-4477-b71a-8a8a56a29896" (UID: "ac384b90-5e6b-4477-b71a-8a8a56a29896"). InnerVolumeSpecName "kube-api-access-lkjql". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.983490 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ac384b90-5e6b-4477-b71a-8a8a56a29896" (UID: "ac384b90-5e6b-4477-b71a-8a8a56a29896"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:46:17 crc kubenswrapper[4817]: I0314 05:46:17.983630 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ac384b90-5e6b-4477-b71a-8a8a56a29896" (UID: "ac384b90-5e6b-4477-b71a-8a8a56a29896"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.064756 4817 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.064800 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkjql\" (UniqueName: \"kubernetes.io/projected/ac384b90-5e6b-4477-b71a-8a8a56a29896-kube-api-access-lkjql\") on node \"crc\" DevicePath \"\"" Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.064813 4817 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.064822 4817 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.064832 4817 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.064840 4817 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-console-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.064849 4817 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac384b90-5e6b-4477-b71a-8a8a56a29896-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:46:18 crc 
kubenswrapper[4817]: I0314 05:46:18.256257 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jnxpm_ac384b90-5e6b-4477-b71a-8a8a56a29896/console/0.log"
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.256333 4817 generic.go:334] "Generic (PLEG): container finished" podID="ac384b90-5e6b-4477-b71a-8a8a56a29896" containerID="8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c" exitCode=2
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.256369 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jnxpm" event={"ID":"ac384b90-5e6b-4477-b71a-8a8a56a29896","Type":"ContainerDied","Data":"8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c"}
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.256400 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jnxpm" event={"ID":"ac384b90-5e6b-4477-b71a-8a8a56a29896","Type":"ContainerDied","Data":"4ef82e82aec93936236e7322113f83734e45d10ee3e0b9f4c0d29cdcdcd5cd4f"}
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.256428 4817 scope.go:117] "RemoveContainer" containerID="8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c"
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.256553 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jnxpm"
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.290605 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jnxpm"]
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.294123 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jnxpm"]
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.396624 4817 scope.go:117] "RemoveContainer" containerID="8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c"
Mar 14 05:46:18 crc kubenswrapper[4817]: E0314 05:46:18.397321 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c\": container with ID starting with 8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c not found: ID does not exist" containerID="8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c"
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.397401 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c"} err="failed to get container status \"8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c\": rpc error: code = NotFound desc = could not find container \"8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c\": container with ID starting with 8700b1fc8d4b2b7b8376b36dcced58b3fed46d9f03ac4868266fbcb86504da1c not found: ID does not exist"
Mar 14 05:46:18 crc kubenswrapper[4817]: I0314 05:46:18.738409 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac384b90-5e6b-4477-b71a-8a8a56a29896" path="/var/lib/kubelet/pods/ac384b90-5e6b-4477-b71a-8a8a56a29896/volumes"
Mar 14 05:46:22 crc kubenswrapper[4817]: I0314 05:46:22.696730 4817 generic.go:334] "Generic (PLEG): container finished" podID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerID="7504d438a90a8c4df2bfd2263e2e3a85cd7a503e7e256293ae251d0742ebc25a" exitCode=0
Mar 14 05:46:22 crc kubenswrapper[4817]: I0314 05:46:22.696868 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" event={"ID":"8ab5ccb3-d22e-4570-a16b-c553919536d3","Type":"ContainerDied","Data":"7504d438a90a8c4df2bfd2263e2e3a85cd7a503e7e256293ae251d0742ebc25a"}
Mar 14 05:46:23 crc kubenswrapper[4817]: I0314 05:46:23.705801 4817 generic.go:334] "Generic (PLEG): container finished" podID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerID="b505c7ee0b8929cd62438f7dfdea415b196a06348b7d2eaf07e6ccd280b82095" exitCode=0
Mar 14 05:46:23 crc kubenswrapper[4817]: I0314 05:46:23.705850 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" event={"ID":"8ab5ccb3-d22e-4570-a16b-c553919536d3","Type":"ContainerDied","Data":"b505c7ee0b8929cd62438f7dfdea415b196a06348b7d2eaf07e6ccd280b82095"}
Mar 14 05:46:24 crc kubenswrapper[4817]: I0314 05:46:24.958060 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q"
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.128415 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-util\") pod \"8ab5ccb3-d22e-4570-a16b-c553919536d3\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") "
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.128465 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-bundle\") pod \"8ab5ccb3-d22e-4570-a16b-c553919536d3\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") "
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.128523 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqwbr\" (UniqueName: \"kubernetes.io/projected/8ab5ccb3-d22e-4570-a16b-c553919536d3-kube-api-access-wqwbr\") pod \"8ab5ccb3-d22e-4570-a16b-c553919536d3\" (UID: \"8ab5ccb3-d22e-4570-a16b-c553919536d3\") "
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.129991 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-bundle" (OuterVolumeSpecName: "bundle") pod "8ab5ccb3-d22e-4570-a16b-c553919536d3" (UID: "8ab5ccb3-d22e-4570-a16b-c553919536d3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.143514 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab5ccb3-d22e-4570-a16b-c553919536d3-kube-api-access-wqwbr" (OuterVolumeSpecName: "kube-api-access-wqwbr") pod "8ab5ccb3-d22e-4570-a16b-c553919536d3" (UID: "8ab5ccb3-d22e-4570-a16b-c553919536d3"). InnerVolumeSpecName "kube-api-access-wqwbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.151570 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-util" (OuterVolumeSpecName: "util") pod "8ab5ccb3-d22e-4570-a16b-c553919536d3" (UID: "8ab5ccb3-d22e-4570-a16b-c553919536d3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.230039 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqwbr\" (UniqueName: \"kubernetes.io/projected/8ab5ccb3-d22e-4570-a16b-c553919536d3-kube-api-access-wqwbr\") on node \"crc\" DevicePath \"\""
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.230091 4817 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-util\") on node \"crc\" DevicePath \"\""
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.230105 4817 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab5ccb3-d22e-4570-a16b-c553919536d3-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.723002 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q" event={"ID":"8ab5ccb3-d22e-4570-a16b-c553919536d3","Type":"ContainerDied","Data":"8699e580e64797a6a2db90a924cd1703e2c58dfdc1090f973018eefd67d97ba4"}
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.723055 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8699e580e64797a6a2db90a924cd1703e2c58dfdc1090f973018eefd67d97ba4"
Mar 14 05:46:25 crc kubenswrapper[4817]: I0314 05:46:25.723125 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q"
Mar 14 05:46:38 crc kubenswrapper[4817]: I0314 05:46:38.565104 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 05:46:38 crc kubenswrapper[4817]: I0314 05:46:38.565633 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.305494 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"]
Mar 14 05:46:39 crc kubenswrapper[4817]: E0314 05:46:39.306065 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerName="util"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.306083 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerName="util"
Mar 14 05:46:39 crc kubenswrapper[4817]: E0314 05:46:39.306098 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerName="pull"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.306106 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerName="pull"
Mar 14 05:46:39 crc kubenswrapper[4817]: E0314 05:46:39.306121 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac384b90-5e6b-4477-b71a-8a8a56a29896" containerName="console"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.306129 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac384b90-5e6b-4477-b71a-8a8a56a29896" containerName="console"
Mar 14 05:46:39 crc kubenswrapper[4817]: E0314 05:46:39.306151 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerName="extract"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.306159 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerName="extract"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.306272 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac384b90-5e6b-4477-b71a-8a8a56a29896" containerName="console"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.306288 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab5ccb3-d22e-4570-a16b-c553919536d3" containerName="extract"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.306842 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.312361 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.312394 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-kxr8d"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.312608 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.314291 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.319347 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.338994 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"]
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.439270 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-apiservice-cert\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.439788 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2dq2\" (UniqueName: \"kubernetes.io/projected/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-kube-api-access-z2dq2\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.439820 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-webhook-cert\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.540724 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-apiservice-cert\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.541777 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2dq2\" (UniqueName: \"kubernetes.io/projected/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-kube-api-access-z2dq2\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.541810 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-webhook-cert\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.549367 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-apiservice-cert\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.549798 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-webhook-cert\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.569033 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2dq2\" (UniqueName: \"kubernetes.io/projected/55748352-cae5-4b0d-8d5d-ed70b1e62fbd-kube-api-access-z2dq2\") pod \"metallb-operator-controller-manager-6f7859bbb-rtk7b\" (UID: \"55748352-cae5-4b0d-8d5d-ed70b1e62fbd\") " pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.623915 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.765292 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"]
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.766025 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.776348 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ll5ds"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.776617 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.776768 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.781933 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"]
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.849705 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltztv\" (UniqueName: \"kubernetes.io/projected/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-kube-api-access-ltztv\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.850040 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-webhook-cert\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.850120 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-apiservice-cert\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.895060 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"]
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.951637 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-apiservice-cert\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.951701 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltztv\" (UniqueName: \"kubernetes.io/projected/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-kube-api-access-ltztv\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.951736 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-webhook-cert\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.957986 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-webhook-cert\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.960501 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-apiservice-cert\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:39 crc kubenswrapper[4817]: I0314 05:46:39.980688 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltztv\" (UniqueName: \"kubernetes.io/projected/707a2b72-26b7-48a9-b7e6-dcf7989deb6b-kube-api-access-ltztv\") pod \"metallb-operator-webhook-server-757d57bdfc-8q5gf\" (UID: \"707a2b72-26b7-48a9-b7e6-dcf7989deb6b\") " pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:40 crc kubenswrapper[4817]: I0314 05:46:40.097186 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:40 crc kubenswrapper[4817]: I0314 05:46:40.365089 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"]
Mar 14 05:46:40 crc kubenswrapper[4817]: I0314 05:46:40.813368 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b" event={"ID":"55748352-cae5-4b0d-8d5d-ed70b1e62fbd","Type":"ContainerStarted","Data":"d9b74a81d65b21471f66fa27e530c3394ff7ba1582204bd36607a21a524228c5"}
Mar 14 05:46:40 crc kubenswrapper[4817]: I0314 05:46:40.815297 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf" event={"ID":"707a2b72-26b7-48a9-b7e6-dcf7989deb6b","Type":"ContainerStarted","Data":"a76b68d48af12914ed9bf4614177295a3e5df217cfb0aeb4d84e8d7c573991eb"}
Mar 14 05:46:41 crc kubenswrapper[4817]: I0314 05:46:41.417023 4817 patch_prober.go:28] interesting pod/dns-default-zc2cb container/dns namespace/openshift-dns: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=kubernetes
Mar 14 05:46:41 crc kubenswrapper[4817]: I0314 05:46:41.417084 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-zc2cb" podUID="f686c00d-9b3f-4ad0-a44a-2a27218f9d3c" containerName="dns" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 14 05:46:46 crc kubenswrapper[4817]: I0314 05:46:46.852772 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf" event={"ID":"707a2b72-26b7-48a9-b7e6-dcf7989deb6b","Type":"ContainerStarted","Data":"e4f73fed4dccc8cb3c44d22ae6a6a8c5c17577c19bf47dcefb146ae6db3f8082"}
Mar 14 05:46:46 crc kubenswrapper[4817]: I0314 05:46:46.853306 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:46:46 crc kubenswrapper[4817]: I0314 05:46:46.854689 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b" event={"ID":"55748352-cae5-4b0d-8d5d-ed70b1e62fbd","Type":"ContainerStarted","Data":"74cf8373a7bc55934467c1a3395d43244cd05c7c1b21a7ec22e1c83a8f0873a8"}
Mar 14 05:46:46 crc kubenswrapper[4817]: I0314 05:46:46.854842 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:46:46 crc kubenswrapper[4817]: I0314 05:46:46.877851 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf" podStartSLOduration=1.729996892 podStartE2EDuration="7.877828005s" podCreationTimestamp="2026-03-14 05:46:39 +0000 UTC" firstStartedPulling="2026-03-14 05:46:40.377288935 +0000 UTC m=+854.415549691" lastFinishedPulling="2026-03-14 05:46:46.525120058 +0000 UTC m=+860.563380804" observedRunningTime="2026-03-14 05:46:46.874997385 +0000 UTC m=+860.913258131" watchObservedRunningTime="2026-03-14 05:46:46.877828005 +0000 UTC m=+860.916088751"
Mar 14 05:46:46 crc kubenswrapper[4817]: I0314 05:46:46.895793 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b" podStartSLOduration=1.2851984490000001 podStartE2EDuration="7.895775987s" podCreationTimestamp="2026-03-14 05:46:39 +0000 UTC" firstStartedPulling="2026-03-14 05:46:39.907372326 +0000 UTC m=+853.945633072" lastFinishedPulling="2026-03-14 05:46:46.517949864 +0000 UTC m=+860.556210610" observedRunningTime="2026-03-14 05:46:46.893150892 +0000 UTC m=+860.931411658" watchObservedRunningTime="2026-03-14 05:46:46.895775987 +0000 UTC m=+860.934036733"
Mar 14 05:46:54 crc kubenswrapper[4817]: I0314 05:46:54.064477 4817 scope.go:117] "RemoveContainer" containerID="4783521c5b9fcad1241d3ba69b6058fd186d7ddcf7cc58e7a7b76e72873daa9e"
Mar 14 05:47:00 crc kubenswrapper[4817]: I0314 05:47:00.111965 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-757d57bdfc-8q5gf"
Mar 14 05:47:08 crc kubenswrapper[4817]: I0314 05:47:08.565306 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 05:47:08 crc kubenswrapper[4817]: I0314 05:47:08.565909 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 05:47:19 crc kubenswrapper[4817]: I0314 05:47:19.626976 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6f7859bbb-rtk7b"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.438337 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t5x6v"]
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.441380 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5"]
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.441968 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.442471 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.450396 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.450643 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-6jhg7"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.450780 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.450940 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.452709 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5"]
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.494830 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-reloader\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.494918 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-metrics\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.494937 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-frr-sockets\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.494959 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjtr8\" (UniqueName: \"kubernetes.io/projected/4887eb15-670e-4460-9df4-f50ff914238e-kube-api-access-wjtr8\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.494988 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-frr-conf\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.495006 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4887eb15-670e-4460-9df4-f50ff914238e-frr-startup\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.495070 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1393ab55-8647-4411-86f7-a034c8bbd227-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-hqjc5\" (UID: \"1393ab55-8647-4411-86f7-a034c8bbd227\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.495106 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wltws\" (UniqueName: \"kubernetes.io/projected/1393ab55-8647-4411-86f7-a034c8bbd227-kube-api-access-wltws\") pod \"frr-k8s-webhook-server-bcc4b6f68-hqjc5\" (UID: \"1393ab55-8647-4411-86f7-a034c8bbd227\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.495122 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4887eb15-670e-4460-9df4-f50ff914238e-metrics-certs\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603499 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wltws\" (UniqueName: \"kubernetes.io/projected/1393ab55-8647-4411-86f7-a034c8bbd227-kube-api-access-wltws\") pod \"frr-k8s-webhook-server-bcc4b6f68-hqjc5\" (UID: \"1393ab55-8647-4411-86f7-a034c8bbd227\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603542 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4887eb15-670e-4460-9df4-f50ff914238e-metrics-certs\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: E0314 05:47:20.603630 4817 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Mar 14 05:47:20 crc kubenswrapper[4817]: E0314 05:47:20.603684 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4887eb15-670e-4460-9df4-f50ff914238e-metrics-certs podName:4887eb15-670e-4460-9df4-f50ff914238e nodeName:}" failed. No retries permitted until 2026-03-14 05:47:21.103664588 +0000 UTC m=+895.141925344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4887eb15-670e-4460-9df4-f50ff914238e-metrics-certs") pod "frr-k8s-t5x6v" (UID: "4887eb15-670e-4460-9df4-f50ff914238e") : secret "frr-k8s-certs-secret" not found
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603818 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-reloader\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603867 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-metrics\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603888 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-frr-sockets\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603925 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjtr8\" (UniqueName: \"kubernetes.io/projected/4887eb15-670e-4460-9df4-f50ff914238e-kube-api-access-wjtr8\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603961 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-frr-conf\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603975 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4887eb15-670e-4460-9df4-f50ff914238e-frr-startup\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.603997 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1393ab55-8647-4411-86f7-a034c8bbd227-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-hqjc5\" (UID: \"1393ab55-8647-4411-86f7-a034c8bbd227\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5"
Mar 14 05:47:20 crc kubenswrapper[4817]: E0314 05:47:20.604117 4817 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Mar 14 05:47:20 crc kubenswrapper[4817]: E0314 05:47:20.604184 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1393ab55-8647-4411-86f7-a034c8bbd227-cert podName:1393ab55-8647-4411-86f7-a034c8bbd227 nodeName:}" failed. No retries permitted until 2026-03-14 05:47:21.104162502 +0000 UTC m=+895.142423248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1393ab55-8647-4411-86f7-a034c8bbd227-cert") pod "frr-k8s-webhook-server-bcc4b6f68-hqjc5" (UID: "1393ab55-8647-4411-86f7-a034c8bbd227") : secret "frr-k8s-webhook-server-cert" not found
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.604228 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-reloader\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.604269 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-metrics\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.604453 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-frr-sockets\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.604854 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4887eb15-670e-4460-9df4-f50ff914238e-frr-startup\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.605104 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4887eb15-670e-4460-9df4-f50ff914238e-frr-conf\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v"
Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.630803 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-88cxz"] Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.631831 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.634700 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.640208 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjtr8\" (UniqueName: \"kubernetes.io/projected/4887eb15-670e-4460-9df4-f50ff914238e-kube-api-access-wjtr8\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.647447 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jntvk"] Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.648311 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.651418 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-88cxz"] Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.652868 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wltws\" (UniqueName: \"kubernetes.io/projected/1393ab55-8647-4411-86f7-a034c8bbd227-kube-api-access-wltws\") pod \"frr-k8s-webhook-server-bcc4b6f68-hqjc5\" (UID: \"1393ab55-8647-4411-86f7-a034c8bbd227\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.663477 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.663516 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.663725 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.663842 4817 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ct56g" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.705407 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/420d83f8-e6b6-4433-8e63-ae624bcf1241-metallb-excludel2\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.705465 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e421e9b6-37af-4150-8a96-419fe1e1f267-cert\") pod 
\"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.705496 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-metrics-certs\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.705656 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e421e9b6-37af-4150-8a96-419fe1e1f267-metrics-certs\") pod \"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.705751 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.705839 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvxzd\" (UniqueName: \"kubernetes.io/projected/420d83f8-e6b6-4433-8e63-ae624bcf1241-kube-api-access-bvxzd\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.705956 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqks\" (UniqueName: \"kubernetes.io/projected/e421e9b6-37af-4150-8a96-419fe1e1f267-kube-api-access-txqks\") pod 
\"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.807062 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e421e9b6-37af-4150-8a96-419fe1e1f267-metrics-certs\") pod \"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.807536 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.807694 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvxzd\" (UniqueName: \"kubernetes.io/projected/420d83f8-e6b6-4433-8e63-ae624bcf1241-kube-api-access-bvxzd\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.807816 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqks\" (UniqueName: \"kubernetes.io/projected/e421e9b6-37af-4150-8a96-419fe1e1f267-kube-api-access-txqks\") pod \"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.807974 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/420d83f8-e6b6-4433-8e63-ae624bcf1241-metallb-excludel2\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " 
pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.808098 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e421e9b6-37af-4150-8a96-419fe1e1f267-cert\") pod \"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.808213 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-metrics-certs\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: E0314 05:47:20.807739 4817 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 14 05:47:20 crc kubenswrapper[4817]: E0314 05:47:20.809159 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist podName:420d83f8-e6b6-4433-8e63-ae624bcf1241 nodeName:}" failed. No retries permitted until 2026-03-14 05:47:21.308994993 +0000 UTC m=+895.347255749 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist") pod "speaker-jntvk" (UID: "420d83f8-e6b6-4433-8e63-ae624bcf1241") : secret "metallb-memberlist" not found Mar 14 05:47:20 crc kubenswrapper[4817]: E0314 05:47:20.808428 4817 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 14 05:47:20 crc kubenswrapper[4817]: E0314 05:47:20.809221 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-metrics-certs podName:420d83f8-e6b6-4433-8e63-ae624bcf1241 nodeName:}" failed. No retries permitted until 2026-03-14 05:47:21.309208359 +0000 UTC m=+895.347469125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-metrics-certs") pod "speaker-jntvk" (UID: "420d83f8-e6b6-4433-8e63-ae624bcf1241") : secret "speaker-certs-secret" not found Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.810119 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/420d83f8-e6b6-4433-8e63-ae624bcf1241-metallb-excludel2\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.811518 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e421e9b6-37af-4150-8a96-419fe1e1f267-metrics-certs\") pod \"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.814412 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e421e9b6-37af-4150-8a96-419fe1e1f267-cert\") pod \"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.841325 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqks\" (UniqueName: \"kubernetes.io/projected/e421e9b6-37af-4150-8a96-419fe1e1f267-kube-api-access-txqks\") pod \"controller-7bb4cc7c98-88cxz\" (UID: \"e421e9b6-37af-4150-8a96-419fe1e1f267\") " pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.842643 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvxzd\" (UniqueName: \"kubernetes.io/projected/420d83f8-e6b6-4433-8e63-ae624bcf1241-kube-api-access-bvxzd\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:20 crc kubenswrapper[4817]: I0314 05:47:20.970280 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.114096 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1393ab55-8647-4411-86f7-a034c8bbd227-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-hqjc5\" (UID: \"1393ab55-8647-4411-86f7-a034c8bbd227\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.116645 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4887eb15-670e-4460-9df4-f50ff914238e-metrics-certs\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.121479 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1393ab55-8647-4411-86f7-a034c8bbd227-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-hqjc5\" (UID: \"1393ab55-8647-4411-86f7-a034c8bbd227\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.121547 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4887eb15-670e-4460-9df4-f50ff914238e-metrics-certs\") pod \"frr-k8s-t5x6v\" (UID: \"4887eb15-670e-4460-9df4-f50ff914238e\") " pod="metallb-system/frr-k8s-t5x6v" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.126256 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-t5x6v" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.318943 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-metrics-certs\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.319012 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:21 crc kubenswrapper[4817]: E0314 05:47:21.319140 4817 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 14 05:47:21 crc kubenswrapper[4817]: E0314 05:47:21.319191 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist podName:420d83f8-e6b6-4433-8e63-ae624bcf1241 nodeName:}" failed. No retries permitted until 2026-03-14 05:47:22.31917706 +0000 UTC m=+896.357437806 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist") pod "speaker-jntvk" (UID: "420d83f8-e6b6-4433-8e63-ae624bcf1241") : secret "metallb-memberlist" not found Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.328030 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-metrics-certs\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.406037 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-88cxz"] Mar 14 05:47:21 crc kubenswrapper[4817]: W0314 05:47:21.414510 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode421e9b6_37af_4150_8a96_419fe1e1f267.slice/crio-810d15437b82e7287956722e8f6d36f8cb1f9ec443bc7b3aa0f04197918926ef WatchSource:0}: Error finding container 810d15437b82e7287956722e8f6d36f8cb1f9ec443bc7b3aa0f04197918926ef: Status 404 returned error can't find the container with id 810d15437b82e7287956722e8f6d36f8cb1f9ec443bc7b3aa0f04197918926ef Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.420212 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" Mar 14 05:47:21 crc kubenswrapper[4817]: I0314 05:47:21.620798 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5"] Mar 14 05:47:21 crc kubenswrapper[4817]: W0314 05:47:21.625174 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1393ab55_8647_4411_86f7_a034c8bbd227.slice/crio-b3b419e6ecc28577be0509cdc1d626755f289f7b33c65fdea055873cdc2f0998 WatchSource:0}: Error finding container b3b419e6ecc28577be0509cdc1d626755f289f7b33c65fdea055873cdc2f0998: Status 404 returned error can't find the container with id b3b419e6ecc28577be0509cdc1d626755f289f7b33c65fdea055873cdc2f0998 Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.094646 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" event={"ID":"1393ab55-8647-4411-86f7-a034c8bbd227","Type":"ContainerStarted","Data":"b3b419e6ecc28577be0509cdc1d626755f289f7b33c65fdea055873cdc2f0998"} Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.097452 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-88cxz" event={"ID":"e421e9b6-37af-4150-8a96-419fe1e1f267","Type":"ContainerStarted","Data":"f0237df9caacea9292107a2426bd237de20d58036a03d4feb4ff3bd9988067fe"} Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.097492 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-88cxz" event={"ID":"e421e9b6-37af-4150-8a96-419fe1e1f267","Type":"ContainerStarted","Data":"c5ad94442d12a603317717c6e1be98cb5f1e21ac6842b96854476f77fe9ee749"} Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.097502 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-88cxz" 
event={"ID":"e421e9b6-37af-4150-8a96-419fe1e1f267","Type":"ContainerStarted","Data":"810d15437b82e7287956722e8f6d36f8cb1f9ec443bc7b3aa0f04197918926ef"} Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.097653 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.098568 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerStarted","Data":"3bf34ea52e9947e4f78159ef65516d9c2c02d592368ad2a654b61c56fb2ce69a"} Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.111052 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-88cxz" podStartSLOduration=2.11103188 podStartE2EDuration="2.11103188s" podCreationTimestamp="2026-03-14 05:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:47:22.109351462 +0000 UTC m=+896.147612218" watchObservedRunningTime="2026-03-14 05:47:22.11103188 +0000 UTC m=+896.149292626" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.330261 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.334803 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/420d83f8-e6b6-4433-8e63-ae624bcf1241-memberlist\") pod \"speaker-jntvk\" (UID: \"420d83f8-e6b6-4433-8e63-ae624bcf1241\") " pod="metallb-system/speaker-jntvk" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.488421 4817 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/speaker-jntvk" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.488665 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p27dn"] Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.490106 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.532326 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csjqh\" (UniqueName: \"kubernetes.io/projected/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-kube-api-access-csjqh\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.532402 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-utilities\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.532657 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-catalog-content\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: W0314 05:47:22.534126 4817 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420d83f8_e6b6_4433_8e63_ae624bcf1241.slice/crio-e69e45ee1bc82242d3bb1add73fdad008215aa8e7dcd7bb19adc4f82ce14d5a0 WatchSource:0}: Error finding container e69e45ee1bc82242d3bb1add73fdad008215aa8e7dcd7bb19adc4f82ce14d5a0: Status 404 returned error can't find the container with id e69e45ee1bc82242d3bb1add73fdad008215aa8e7dcd7bb19adc4f82ce14d5a0 Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.618834 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p27dn"] Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.633340 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-catalog-content\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.633404 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csjqh\" (UniqueName: \"kubernetes.io/projected/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-kube-api-access-csjqh\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.633446 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-utilities\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.634398 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-utilities\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.634565 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-catalog-content\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.677996 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csjqh\" (UniqueName: \"kubernetes.io/projected/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-kube-api-access-csjqh\") pod \"community-operators-p27dn\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:22 crc kubenswrapper[4817]: I0314 05:47:22.859845 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:23 crc kubenswrapper[4817]: I0314 05:47:23.123128 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jntvk" event={"ID":"420d83f8-e6b6-4433-8e63-ae624bcf1241","Type":"ContainerStarted","Data":"defcdcdf55c17ba1ee751c9b248355d9f8344a94a85a2ed0e6e473d94e481338"} Mar 14 05:47:23 crc kubenswrapper[4817]: I0314 05:47:23.123165 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jntvk" event={"ID":"420d83f8-e6b6-4433-8e63-ae624bcf1241","Type":"ContainerStarted","Data":"e69e45ee1bc82242d3bb1add73fdad008215aa8e7dcd7bb19adc4f82ce14d5a0"} Mar 14 05:47:23 crc kubenswrapper[4817]: I0314 05:47:23.397062 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p27dn"] Mar 14 05:47:24 crc kubenswrapper[4817]: I0314 05:47:24.135036 4817 generic.go:334] "Generic (PLEG): container finished" podID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerID="584d0102f9413d400f6d780cf4f2f08482c39327f25e8cc0d1c9c90cf8050e11" exitCode=0 Mar 14 05:47:24 crc kubenswrapper[4817]: I0314 05:47:24.135114 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p27dn" event={"ID":"850d6739-2e4e-4bed-b43b-bc11dbf91b9a","Type":"ContainerDied","Data":"584d0102f9413d400f6d780cf4f2f08482c39327f25e8cc0d1c9c90cf8050e11"} Mar 14 05:47:24 crc kubenswrapper[4817]: I0314 05:47:24.135147 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p27dn" event={"ID":"850d6739-2e4e-4bed-b43b-bc11dbf91b9a","Type":"ContainerStarted","Data":"854fb62dc087dd551ee18ae13502a5ccfb4083c42c9dc8344079b6b40c12dc61"} Mar 14 05:47:24 crc kubenswrapper[4817]: I0314 05:47:24.142167 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jntvk" 
event={"ID":"420d83f8-e6b6-4433-8e63-ae624bcf1241","Type":"ContainerStarted","Data":"38af3b0085f489e6df35abc042a039ee1e82c494d15eca4f9cf1ef41935c1ae1"} Mar 14 05:47:24 crc kubenswrapper[4817]: I0314 05:47:24.142823 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jntvk" Mar 14 05:47:26 crc kubenswrapper[4817]: I0314 05:47:26.163316 4817 generic.go:334] "Generic (PLEG): container finished" podID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerID="2860914e3df419fceee26f11875eb7027dadb5afd4566c83152735928baaab08" exitCode=0 Mar 14 05:47:26 crc kubenswrapper[4817]: I0314 05:47:26.164004 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p27dn" event={"ID":"850d6739-2e4e-4bed-b43b-bc11dbf91b9a","Type":"ContainerDied","Data":"2860914e3df419fceee26f11875eb7027dadb5afd4566c83152735928baaab08"} Mar 14 05:47:26 crc kubenswrapper[4817]: I0314 05:47:26.195122 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jntvk" podStartSLOduration=6.195101685 podStartE2EDuration="6.195101685s" podCreationTimestamp="2026-03-14 05:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:47:24.193281114 +0000 UTC m=+898.231541860" watchObservedRunningTime="2026-03-14 05:47:26.195101685 +0000 UTC m=+900.233362441" Mar 14 05:47:30 crc kubenswrapper[4817]: I0314 05:47:30.190993 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" event={"ID":"1393ab55-8647-4411-86f7-a034c8bbd227","Type":"ContainerStarted","Data":"0ce3fa0450994fa865c81c97840026051cd4fd787b3380391bfaf75c0e60e253"} Mar 14 05:47:30 crc kubenswrapper[4817]: I0314 05:47:30.191844 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" Mar 14 05:47:30 crc 
kubenswrapper[4817]: I0314 05:47:30.193237 4817 generic.go:334] "Generic (PLEG): container finished" podID="4887eb15-670e-4460-9df4-f50ff914238e" containerID="692451b3fb2a045d90d84cc7469778916b460caba0a46c9f071c85bcdc8037c6" exitCode=0 Mar 14 05:47:30 crc kubenswrapper[4817]: I0314 05:47:30.193343 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerDied","Data":"692451b3fb2a045d90d84cc7469778916b460caba0a46c9f071c85bcdc8037c6"} Mar 14 05:47:30 crc kubenswrapper[4817]: I0314 05:47:30.196632 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p27dn" event={"ID":"850d6739-2e4e-4bed-b43b-bc11dbf91b9a","Type":"ContainerStarted","Data":"11b067419c7b76535ffa063df54afc06c6d203df4a4eb9ba85455f880ae9895f"} Mar 14 05:47:30 crc kubenswrapper[4817]: I0314 05:47:30.219838 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" podStartSLOduration=2.508964727 podStartE2EDuration="10.219812048s" podCreationTimestamp="2026-03-14 05:47:20 +0000 UTC" firstStartedPulling="2026-03-14 05:47:21.628268244 +0000 UTC m=+895.666528990" lastFinishedPulling="2026-03-14 05:47:29.339115565 +0000 UTC m=+903.377376311" observedRunningTime="2026-03-14 05:47:30.21251006 +0000 UTC m=+904.250770816" watchObservedRunningTime="2026-03-14 05:47:30.219812048 +0000 UTC m=+904.258072794" Mar 14 05:47:30 crc kubenswrapper[4817]: I0314 05:47:30.281108 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p27dn" podStartSLOduration=3.578558804 podStartE2EDuration="8.281064024s" podCreationTimestamp="2026-03-14 05:47:22 +0000 UTC" firstStartedPulling="2026-03-14 05:47:24.138308217 +0000 UTC m=+898.176568963" lastFinishedPulling="2026-03-14 05:47:28.840813437 +0000 UTC m=+902.879074183" observedRunningTime="2026-03-14 
05:47:30.279432618 +0000 UTC m=+904.317693384" watchObservedRunningTime="2026-03-14 05:47:30.281064024 +0000 UTC m=+904.319324780" Mar 14 05:47:31 crc kubenswrapper[4817]: I0314 05:47:31.207626 4817 generic.go:334] "Generic (PLEG): container finished" podID="4887eb15-670e-4460-9df4-f50ff914238e" containerID="76061d4a63a168ce8e66a23b3250556524fe1c40f63db916914d861aacd565f6" exitCode=0 Mar 14 05:47:31 crc kubenswrapper[4817]: I0314 05:47:31.207668 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerDied","Data":"76061d4a63a168ce8e66a23b3250556524fe1c40f63db916914d861aacd565f6"} Mar 14 05:47:32 crc kubenswrapper[4817]: I0314 05:47:32.218140 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerStarted","Data":"5e9dd8102e1797e4eb1f8005f7264b57bc85ed718ba78edc0a773576ff9edaf1"} Mar 14 05:47:32 crc kubenswrapper[4817]: I0314 05:47:32.492852 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jntvk" Mar 14 05:47:32 crc kubenswrapper[4817]: I0314 05:47:32.860085 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:32 crc kubenswrapper[4817]: I0314 05:47:32.860405 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:32 crc kubenswrapper[4817]: I0314 05:47:32.909110 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:33 crc kubenswrapper[4817]: I0314 05:47:33.226941 4817 generic.go:334] "Generic (PLEG): container finished" podID="4887eb15-670e-4460-9df4-f50ff914238e" containerID="5e9dd8102e1797e4eb1f8005f7264b57bc85ed718ba78edc0a773576ff9edaf1" exitCode=0 Mar 14 
05:47:33 crc kubenswrapper[4817]: I0314 05:47:33.227013 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerDied","Data":"5e9dd8102e1797e4eb1f8005f7264b57bc85ed718ba78edc0a773576ff9edaf1"} Mar 14 05:47:34 crc kubenswrapper[4817]: I0314 05:47:34.239123 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerStarted","Data":"ab6ba83cff37c62c0922dabba4d0e734d336a733b4421fb5ff03547960c43e86"} Mar 14 05:47:34 crc kubenswrapper[4817]: I0314 05:47:34.239186 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerStarted","Data":"4aad9480f041c4812077c1720e6a409d5828cd24c3f7af4f408ca140faeb4d0a"} Mar 14 05:47:34 crc kubenswrapper[4817]: I0314 05:47:34.239199 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerStarted","Data":"1872b818e1d3a81b736e96d4bd922c7034625bbc36fd508c17b18a89536cfa06"} Mar 14 05:47:34 crc kubenswrapper[4817]: I0314 05:47:34.239210 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerStarted","Data":"d651cbe10c799422b70bdc2aee044648229d024c73c6a49de36d327460b9dc98"} Mar 14 05:47:34 crc kubenswrapper[4817]: I0314 05:47:34.239219 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerStarted","Data":"dd930004fef53cf215bdf17ca368c239a29f6d509a2d1b362c6c22718e3301aa"} Mar 14 05:47:34 crc kubenswrapper[4817]: I0314 05:47:34.286801 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:34 crc kubenswrapper[4817]: I0314 05:47:34.331533 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p27dn"] Mar 14 05:47:35 crc kubenswrapper[4817]: I0314 05:47:35.264067 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t5x6v" event={"ID":"4887eb15-670e-4460-9df4-f50ff914238e","Type":"ContainerStarted","Data":"8dc10b3d575435bd1fdff03c1153b04afda13a67f0fc0736852fe33d17f4b689"} Mar 14 05:47:35 crc kubenswrapper[4817]: I0314 05:47:35.265275 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t5x6v" Mar 14 05:47:35 crc kubenswrapper[4817]: I0314 05:47:35.294306 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t5x6v" podStartSLOduration=7.231587173 podStartE2EDuration="15.294279696s" podCreationTimestamp="2026-03-14 05:47:20 +0000 UTC" firstStartedPulling="2026-03-14 05:47:21.311508452 +0000 UTC m=+895.349769238" lastFinishedPulling="2026-03-14 05:47:29.374201015 +0000 UTC m=+903.412461761" observedRunningTime="2026-03-14 05:47:35.293431212 +0000 UTC m=+909.331691978" watchObservedRunningTime="2026-03-14 05:47:35.294279696 +0000 UTC m=+909.332540442" Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.127233 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t5x6v" Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.195846 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t5x6v" Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.271099 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p27dn" podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerName="registry-server" 
containerID="cri-o://11b067419c7b76535ffa063df54afc06c6d203df4a4eb9ba85455f880ae9895f" gracePeriod=2 Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.741284 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-686t2"] Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.742438 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.747422 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.747493 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-bl842" Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.749287 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.753100 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-686t2"] Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.880062 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpcjv\" (UniqueName: \"kubernetes.io/projected/e2e7b4b7-9377-4f51-92ff-8d8024a13484-kube-api-access-cpcjv\") pod \"openstack-operator-index-686t2\" (UID: \"e2e7b4b7-9377-4f51-92ff-8d8024a13484\") " pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:36 crc kubenswrapper[4817]: I0314 05:47:36.981255 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpcjv\" (UniqueName: \"kubernetes.io/projected/e2e7b4b7-9377-4f51-92ff-8d8024a13484-kube-api-access-cpcjv\") pod \"openstack-operator-index-686t2\" (UID: \"e2e7b4b7-9377-4f51-92ff-8d8024a13484\") " 
pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:37 crc kubenswrapper[4817]: I0314 05:47:37.003634 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpcjv\" (UniqueName: \"kubernetes.io/projected/e2e7b4b7-9377-4f51-92ff-8d8024a13484-kube-api-access-cpcjv\") pod \"openstack-operator-index-686t2\" (UID: \"e2e7b4b7-9377-4f51-92ff-8d8024a13484\") " pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:37 crc kubenswrapper[4817]: I0314 05:47:37.102217 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:37 crc kubenswrapper[4817]: I0314 05:47:37.545066 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-686t2"] Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.302780 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-686t2" event={"ID":"e2e7b4b7-9377-4f51-92ff-8d8024a13484","Type":"ContainerStarted","Data":"3d01576ec7d9808560cc1ef24aaff997b5fa48b0edf004fe696e937d56361a80"} Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.305095 4817 generic.go:334] "Generic (PLEG): container finished" podID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerID="11b067419c7b76535ffa063df54afc06c6d203df4a4eb9ba85455f880ae9895f" exitCode=0 Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.305138 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p27dn" event={"ID":"850d6739-2e4e-4bed-b43b-bc11dbf91b9a","Type":"ContainerDied","Data":"11b067419c7b76535ffa063df54afc06c6d203df4a4eb9ba85455f880ae9895f"} Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.555407 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.565943 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.565993 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.566028 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.566573 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"114f8ca4faca8cf630930433049d8d1045dc89bc450b7ac565ae6c778fa29990"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.566627 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://114f8ca4faca8cf630930433049d8d1045dc89bc450b7ac565ae6c778fa29990" gracePeriod=600 Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.631362 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-utilities\") pod \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.631432 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-catalog-content\") pod \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.631467 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csjqh\" (UniqueName: \"kubernetes.io/projected/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-kube-api-access-csjqh\") pod \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\" (UID: \"850d6739-2e4e-4bed-b43b-bc11dbf91b9a\") " Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.632688 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-utilities" (OuterVolumeSpecName: "utilities") pod "850d6739-2e4e-4bed-b43b-bc11dbf91b9a" (UID: "850d6739-2e4e-4bed-b43b-bc11dbf91b9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.642432 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-kube-api-access-csjqh" (OuterVolumeSpecName: "kube-api-access-csjqh") pod "850d6739-2e4e-4bed-b43b-bc11dbf91b9a" (UID: "850d6739-2e4e-4bed-b43b-bc11dbf91b9a"). InnerVolumeSpecName "kube-api-access-csjqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.711341 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "850d6739-2e4e-4bed-b43b-bc11dbf91b9a" (UID: "850d6739-2e4e-4bed-b43b-bc11dbf91b9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.737153 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.737206 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:47:38 crc kubenswrapper[4817]: I0314 05:47:38.737225 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csjqh\" (UniqueName: \"kubernetes.io/projected/850d6739-2e4e-4bed-b43b-bc11dbf91b9a-kube-api-access-csjqh\") on node \"crc\" DevicePath \"\"" Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.319035 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="114f8ca4faca8cf630930433049d8d1045dc89bc450b7ac565ae6c778fa29990" exitCode=0 Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.319312 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"114f8ca4faca8cf630930433049d8d1045dc89bc450b7ac565ae6c778fa29990"} Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.319556 4817 scope.go:117] 
"RemoveContainer" containerID="c7f1199d020b47f75e517c784668558faa68dcbb94d53dd88ba20907f7920ea4" Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.323079 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p27dn" Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.323100 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p27dn" event={"ID":"850d6739-2e4e-4bed-b43b-bc11dbf91b9a","Type":"ContainerDied","Data":"854fb62dc087dd551ee18ae13502a5ccfb4083c42c9dc8344079b6b40c12dc61"} Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.347765 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p27dn"] Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.353274 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p27dn"] Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.413957 4817 scope.go:117] "RemoveContainer" containerID="11b067419c7b76535ffa063df54afc06c6d203df4a4eb9ba85455f880ae9895f" Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.713344 4817 scope.go:117] "RemoveContainer" containerID="2860914e3df419fceee26f11875eb7027dadb5afd4566c83152735928baaab08" Mar 14 05:47:39 crc kubenswrapper[4817]: I0314 05:47:39.796820 4817 scope.go:117] "RemoveContainer" containerID="584d0102f9413d400f6d780cf4f2f08482c39327f25e8cc0d1c9c90cf8050e11" Mar 14 05:47:40 crc kubenswrapper[4817]: I0314 05:47:40.334009 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"410879e5dd288fd6afed9ea2c23e57c34cba5d0fba30b068075ef7767158e5fe"} Mar 14 05:47:40 crc kubenswrapper[4817]: I0314 05:47:40.748653 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" path="/var/lib/kubelet/pods/850d6739-2e4e-4bed-b43b-bc11dbf91b9a/volumes" Mar 14 05:47:40 crc kubenswrapper[4817]: I0314 05:47:40.975671 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-88cxz" Mar 14 05:47:41 crc kubenswrapper[4817]: I0314 05:47:41.424413 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-hqjc5" Mar 14 05:47:43 crc kubenswrapper[4817]: I0314 05:47:43.374586 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-686t2" event={"ID":"e2e7b4b7-9377-4f51-92ff-8d8024a13484","Type":"ContainerStarted","Data":"241c2395ec534c0cbb102948218151ceb153d137207c7572c8a606a32ea0cabb"} Mar 14 05:47:43 crc kubenswrapper[4817]: I0314 05:47:43.398285 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-686t2" podStartSLOduration=2.652720341 podStartE2EDuration="7.398252105s" podCreationTimestamp="2026-03-14 05:47:36 +0000 UTC" firstStartedPulling="2026-03-14 05:47:37.558548123 +0000 UTC m=+911.596808869" lastFinishedPulling="2026-03-14 05:47:42.304079887 +0000 UTC m=+916.342340633" observedRunningTime="2026-03-14 05:47:43.397333609 +0000 UTC m=+917.435594365" watchObservedRunningTime="2026-03-14 05:47:43.398252105 +0000 UTC m=+917.436512881" Mar 14 05:47:47 crc kubenswrapper[4817]: I0314 05:47:47.103175 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:47 crc kubenswrapper[4817]: I0314 05:47:47.103523 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:47 crc kubenswrapper[4817]: I0314 05:47:47.138867 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:47 crc kubenswrapper[4817]: I0314 05:47:47.425512 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-686t2" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.557950 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l"] Mar 14 05:47:49 crc kubenswrapper[4817]: E0314 05:47:49.558554 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerName="extract-content" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.558571 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerName="extract-content" Mar 14 05:47:49 crc kubenswrapper[4817]: E0314 05:47:49.558585 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerName="extract-utilities" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.558592 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerName="extract-utilities" Mar 14 05:47:49 crc kubenswrapper[4817]: E0314 05:47:49.558609 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerName="registry-server" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.558617 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerName="registry-server" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.558750 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="850d6739-2e4e-4bed-b43b-bc11dbf91b9a" containerName="registry-server" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.559813 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.562800 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ptgbr" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.569690 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l"] Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.619732 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cx5m\" (UniqueName: \"kubernetes.io/projected/aefb397f-8ba5-4680-9976-39dc26760fd7-kube-api-access-4cx5m\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.619818 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-bundle\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.619978 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-util\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 
05:47:49.721682 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cx5m\" (UniqueName: \"kubernetes.io/projected/aefb397f-8ba5-4680-9976-39dc26760fd7-kube-api-access-4cx5m\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.721752 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-bundle\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.721790 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-util\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.722344 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-util\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.722652 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-bundle\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.754117 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cx5m\" (UniqueName: \"kubernetes.io/projected/aefb397f-8ba5-4680-9976-39dc26760fd7-kube-api-access-4cx5m\") pod \"5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") " pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:49 crc kubenswrapper[4817]: I0314 05:47:49.881608 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" Mar 14 05:47:50 crc kubenswrapper[4817]: I0314 05:47:50.221643 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l"] Mar 14 05:47:50 crc kubenswrapper[4817]: I0314 05:47:50.421342 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" event={"ID":"aefb397f-8ba5-4680-9976-39dc26760fd7","Type":"ContainerStarted","Data":"73e64604793cc5d9c6b2d23435a3b26f41527472fea00649116237f5d9c1c44b"} Mar 14 05:47:51 crc kubenswrapper[4817]: I0314 05:47:51.131326 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t5x6v" Mar 14 05:47:51 crc kubenswrapper[4817]: I0314 05:47:51.431711 4817 generic.go:334] "Generic (PLEG): container finished" podID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerID="c4524e756a187e02909d278b90bf416f7e3c8cc258dd03fb0479f0ed1d736bb8" exitCode=0 Mar 14 
05:47:51 crc kubenswrapper[4817]: I0314 05:47:51.431780 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" event={"ID":"aefb397f-8ba5-4680-9976-39dc26760fd7","Type":"ContainerDied","Data":"c4524e756a187e02909d278b90bf416f7e3c8cc258dd03fb0479f0ed1d736bb8"}
Mar 14 05:47:55 crc kubenswrapper[4817]: I0314 05:47:55.455081 4817 generic.go:334] "Generic (PLEG): container finished" podID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerID="2ab4d658f076719c6ced9b8f362f1337db68edc218400829211d9b4e247a24aa" exitCode=0
Mar 14 05:47:55 crc kubenswrapper[4817]: I0314 05:47:55.455187 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" event={"ID":"aefb397f-8ba5-4680-9976-39dc26760fd7","Type":"ContainerDied","Data":"2ab4d658f076719c6ced9b8f362f1337db68edc218400829211d9b4e247a24aa"}
Mar 14 05:47:56 crc kubenswrapper[4817]: I0314 05:47:56.465318 4817 generic.go:334] "Generic (PLEG): container finished" podID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerID="b3f5f0fe685006feeac5323511a32326f4927af33ddd96b3c83ef4ff48b0f5c1" exitCode=0
Mar 14 05:47:56 crc kubenswrapper[4817]: I0314 05:47:56.465452 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" event={"ID":"aefb397f-8ba5-4680-9976-39dc26760fd7","Type":"ContainerDied","Data":"b3f5f0fe685006feeac5323511a32326f4927af33ddd96b3c83ef4ff48b0f5c1"}
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.774146 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l"
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.861446 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-util\") pod \"aefb397f-8ba5-4680-9976-39dc26760fd7\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") "
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.861814 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cx5m\" (UniqueName: \"kubernetes.io/projected/aefb397f-8ba5-4680-9976-39dc26760fd7-kube-api-access-4cx5m\") pod \"aefb397f-8ba5-4680-9976-39dc26760fd7\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") "
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.861851 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-bundle\") pod \"aefb397f-8ba5-4680-9976-39dc26760fd7\" (UID: \"aefb397f-8ba5-4680-9976-39dc26760fd7\") "
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.862490 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-bundle" (OuterVolumeSpecName: "bundle") pod "aefb397f-8ba5-4680-9976-39dc26760fd7" (UID: "aefb397f-8ba5-4680-9976-39dc26760fd7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.876270 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefb397f-8ba5-4680-9976-39dc26760fd7-kube-api-access-4cx5m" (OuterVolumeSpecName: "kube-api-access-4cx5m") pod "aefb397f-8ba5-4680-9976-39dc26760fd7" (UID: "aefb397f-8ba5-4680-9976-39dc26760fd7"). InnerVolumeSpecName "kube-api-access-4cx5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.883249 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-util" (OuterVolumeSpecName: "util") pod "aefb397f-8ba5-4680-9976-39dc26760fd7" (UID: "aefb397f-8ba5-4680-9976-39dc26760fd7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.963370 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cx5m\" (UniqueName: \"kubernetes.io/projected/aefb397f-8ba5-4680-9976-39dc26760fd7-kube-api-access-4cx5m\") on node \"crc\" DevicePath \"\""
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.963417 4817 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:47:57 crc kubenswrapper[4817]: I0314 05:47:57.963430 4817 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb397f-8ba5-4680-9976-39dc26760fd7-util\") on node \"crc\" DevicePath \"\""
Mar 14 05:47:58 crc kubenswrapper[4817]: I0314 05:47:58.485419 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l" event={"ID":"aefb397f-8ba5-4680-9976-39dc26760fd7","Type":"ContainerDied","Data":"73e64604793cc5d9c6b2d23435a3b26f41527472fea00649116237f5d9c1c44b"}
Mar 14 05:47:58 crc kubenswrapper[4817]: I0314 05:47:58.485472 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73e64604793cc5d9c6b2d23435a3b26f41527472fea00649116237f5d9c1c44b"
Mar 14 05:47:58 crc kubenswrapper[4817]: I0314 05:47:58.485479 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.187497 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557788-l2jj8"]
Mar 14 05:48:00 crc kubenswrapper[4817]: E0314 05:48:00.188366 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerName="extract"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.188385 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerName="extract"
Mar 14 05:48:00 crc kubenswrapper[4817]: E0314 05:48:00.188398 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerName="util"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.188404 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerName="util"
Mar 14 05:48:00 crc kubenswrapper[4817]: E0314 05:48:00.188414 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerName="pull"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.188419 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerName="pull"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.188530 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefb397f-8ba5-4680-9976-39dc26760fd7" containerName="extract"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.189070 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557788-l2jj8"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.195933 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.196269 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.196431 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.201533 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557788-l2jj8"]
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.408412 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29phg\" (UniqueName: \"kubernetes.io/projected/5eda5581-2c9a-4a68-8bd5-e595b74b941d-kube-api-access-29phg\") pod \"auto-csr-approver-29557788-l2jj8\" (UID: \"5eda5581-2c9a-4a68-8bd5-e595b74b941d\") " pod="openshift-infra/auto-csr-approver-29557788-l2jj8"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.737521 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29phg\" (UniqueName: \"kubernetes.io/projected/5eda5581-2c9a-4a68-8bd5-e595b74b941d-kube-api-access-29phg\") pod \"auto-csr-approver-29557788-l2jj8\" (UID: \"5eda5581-2c9a-4a68-8bd5-e595b74b941d\") " pod="openshift-infra/auto-csr-approver-29557788-l2jj8"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.811185 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29phg\" (UniqueName: \"kubernetes.io/projected/5eda5581-2c9a-4a68-8bd5-e595b74b941d-kube-api-access-29phg\") pod \"auto-csr-approver-29557788-l2jj8\" (UID: \"5eda5581-2c9a-4a68-8bd5-e595b74b941d\") " pod="openshift-infra/auto-csr-approver-29557788-l2jj8"
Mar 14 05:48:00 crc kubenswrapper[4817]: I0314 05:48:00.813797 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557788-l2jj8"
Mar 14 05:48:01 crc kubenswrapper[4817]: I0314 05:48:01.313985 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557788-l2jj8"]
Mar 14 05:48:01 crc kubenswrapper[4817]: I0314 05:48:01.503005 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557788-l2jj8" event={"ID":"5eda5581-2c9a-4a68-8bd5-e595b74b941d","Type":"ContainerStarted","Data":"265cd61b7251ae585948d3ec2f29eed26d2f6afed5ae568f26246b68299f2d8d"}
Mar 14 05:48:01 crc kubenswrapper[4817]: I0314 05:48:01.881699 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"]
Mar 14 05:48:01 crc kubenswrapper[4817]: I0314 05:48:01.882722 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"
Mar 14 05:48:01 crc kubenswrapper[4817]: I0314 05:48:01.887454 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-dnqqx"
Mar 14 05:48:01 crc kubenswrapper[4817]: I0314 05:48:01.921980 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"]
Mar 14 05:48:02 crc kubenswrapper[4817]: I0314 05:48:02.053519 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtgjm\" (UniqueName: \"kubernetes.io/projected/32b598c5-f4cd-4c5d-9189-e8985b451ae2-kube-api-access-dtgjm\") pod \"openstack-operator-controller-init-5bf7c47ddb-srdqq\" (UID: \"32b598c5-f4cd-4c5d-9189-e8985b451ae2\") " pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"
Mar 14 05:48:02 crc kubenswrapper[4817]: I0314 05:48:02.154850 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtgjm\" (UniqueName: \"kubernetes.io/projected/32b598c5-f4cd-4c5d-9189-e8985b451ae2-kube-api-access-dtgjm\") pod \"openstack-operator-controller-init-5bf7c47ddb-srdqq\" (UID: \"32b598c5-f4cd-4c5d-9189-e8985b451ae2\") " pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"
Mar 14 05:48:02 crc kubenswrapper[4817]: I0314 05:48:02.174881 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtgjm\" (UniqueName: \"kubernetes.io/projected/32b598c5-f4cd-4c5d-9189-e8985b451ae2-kube-api-access-dtgjm\") pod \"openstack-operator-controller-init-5bf7c47ddb-srdqq\" (UID: \"32b598c5-f4cd-4c5d-9189-e8985b451ae2\") " pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"
Mar 14 05:48:02 crc kubenswrapper[4817]: I0314 05:48:02.203485 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"
Mar 14 05:48:02 crc kubenswrapper[4817]: I0314 05:48:02.442389 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"]
Mar 14 05:48:02 crc kubenswrapper[4817]: W0314 05:48:02.446715 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b598c5_f4cd_4c5d_9189_e8985b451ae2.slice/crio-8797bc0c85dec2b521c27b8345ff97f4ad0bc04d3192eafa978bd6dfa2f15c8e WatchSource:0}: Error finding container 8797bc0c85dec2b521c27b8345ff97f4ad0bc04d3192eafa978bd6dfa2f15c8e: Status 404 returned error can't find the container with id 8797bc0c85dec2b521c27b8345ff97f4ad0bc04d3192eafa978bd6dfa2f15c8e
Mar 14 05:48:02 crc kubenswrapper[4817]: I0314 05:48:02.517508 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq" event={"ID":"32b598c5-f4cd-4c5d-9189-e8985b451ae2","Type":"ContainerStarted","Data":"8797bc0c85dec2b521c27b8345ff97f4ad0bc04d3192eafa978bd6dfa2f15c8e"}
Mar 14 05:48:03 crc kubenswrapper[4817]: I0314 05:48:03.534295 4817 generic.go:334] "Generic (PLEG): container finished" podID="5eda5581-2c9a-4a68-8bd5-e595b74b941d" containerID="85a7e082379adfde08a802777abf483d0e78c51947ecb3fde7c6b94720c41653" exitCode=0
Mar 14 05:48:03 crc kubenswrapper[4817]: I0314 05:48:03.534617 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557788-l2jj8" event={"ID":"5eda5581-2c9a-4a68-8bd5-e595b74b941d","Type":"ContainerDied","Data":"85a7e082379adfde08a802777abf483d0e78c51947ecb3fde7c6b94720c41653"}
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.426602 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kz9qz"]
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.428359 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.456572 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz9qz"]
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.607238 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-catalog-content\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.607286 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22s9\" (UniqueName: \"kubernetes.io/projected/b5616aab-7628-48e8-8659-fd6edf67ee27-kube-api-access-p22s9\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.607346 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-utilities\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.709009 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-catalog-content\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.709083 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p22s9\" (UniqueName: \"kubernetes.io/projected/b5616aab-7628-48e8-8659-fd6edf67ee27-kube-api-access-p22s9\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.709156 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-utilities\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.709906 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-utilities\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.710042 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-catalog-content\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.731446 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p22s9\" (UniqueName: \"kubernetes.io/projected/b5616aab-7628-48e8-8659-fd6edf67ee27-kube-api-access-p22s9\") pod \"certified-operators-kz9qz\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") " pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:05 crc kubenswrapper[4817]: I0314 05:48:05.756553 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:06 crc kubenswrapper[4817]: I0314 05:48:06.516869 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557788-l2jj8"
Mar 14 05:48:06 crc kubenswrapper[4817]: I0314 05:48:06.560501 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557788-l2jj8" event={"ID":"5eda5581-2c9a-4a68-8bd5-e595b74b941d","Type":"ContainerDied","Data":"265cd61b7251ae585948d3ec2f29eed26d2f6afed5ae568f26246b68299f2d8d"}
Mar 14 05:48:06 crc kubenswrapper[4817]: I0314 05:48:06.560544 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265cd61b7251ae585948d3ec2f29eed26d2f6afed5ae568f26246b68299f2d8d"
Mar 14 05:48:06 crc kubenswrapper[4817]: I0314 05:48:06.560544 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557788-l2jj8"
Mar 14 05:48:06 crc kubenswrapper[4817]: I0314 05:48:06.622768 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29phg\" (UniqueName: \"kubernetes.io/projected/5eda5581-2c9a-4a68-8bd5-e595b74b941d-kube-api-access-29phg\") pod \"5eda5581-2c9a-4a68-8bd5-e595b74b941d\" (UID: \"5eda5581-2c9a-4a68-8bd5-e595b74b941d\") "
Mar 14 05:48:06 crc kubenswrapper[4817]: I0314 05:48:06.630003 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eda5581-2c9a-4a68-8bd5-e595b74b941d-kube-api-access-29phg" (OuterVolumeSpecName: "kube-api-access-29phg") pod "5eda5581-2c9a-4a68-8bd5-e595b74b941d" (UID: "5eda5581-2c9a-4a68-8bd5-e595b74b941d"). InnerVolumeSpecName "kube-api-access-29phg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:48:06 crc kubenswrapper[4817]: I0314 05:48:06.724012 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29phg\" (UniqueName: \"kubernetes.io/projected/5eda5581-2c9a-4a68-8bd5-e595b74b941d-kube-api-access-29phg\") on node \"crc\" DevicePath \"\""
Mar 14 05:48:07 crc kubenswrapper[4817]: I0314 05:48:07.581181 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557782-fn94w"]
Mar 14 05:48:07 crc kubenswrapper[4817]: I0314 05:48:07.587284 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557782-fn94w"]
Mar 14 05:48:08 crc kubenswrapper[4817]: I0314 05:48:08.609275 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq" event={"ID":"32b598c5-f4cd-4c5d-9189-e8985b451ae2","Type":"ContainerStarted","Data":"5b1c6e62ca433a5bf092f0750e3f79b98c0a28b2b4355c0fbd90051adc09210f"}
Mar 14 05:48:08 crc kubenswrapper[4817]: I0314 05:48:08.610992 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"
Mar 14 05:48:08 crc kubenswrapper[4817]: I0314 05:48:08.638881 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kz9qz"]
Mar 14 05:48:08 crc kubenswrapper[4817]: I0314 05:48:08.641139 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq" podStartSLOduration=1.809317671 podStartE2EDuration="7.641118079s" podCreationTimestamp="2026-03-14 05:48:01 +0000 UTC" firstStartedPulling="2026-03-14 05:48:02.449427429 +0000 UTC m=+936.487688175" lastFinishedPulling="2026-03-14 05:48:08.281227837 +0000 UTC m=+942.319488583" observedRunningTime="2026-03-14 05:48:08.639532104 +0000 UTC m=+942.677792860" watchObservedRunningTime="2026-03-14 05:48:08.641118079 +0000 UTC m=+942.679378835"
Mar 14 05:48:08 crc kubenswrapper[4817]: I0314 05:48:08.750531 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8db7b14-eae0-483f-b839-00ac2e6fc47d" path="/var/lib/kubelet/pods/a8db7b14-eae0-483f-b839-00ac2e6fc47d/volumes"
Mar 14 05:48:09 crc kubenswrapper[4817]: I0314 05:48:09.617585 4817 generic.go:334] "Generic (PLEG): container finished" podID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerID="29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e" exitCode=0
Mar 14 05:48:09 crc kubenswrapper[4817]: I0314 05:48:09.617693 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qz" event={"ID":"b5616aab-7628-48e8-8659-fd6edf67ee27","Type":"ContainerDied","Data":"29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e"}
Mar 14 05:48:09 crc kubenswrapper[4817]: I0314 05:48:09.618981 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qz" event={"ID":"b5616aab-7628-48e8-8659-fd6edf67ee27","Type":"ContainerStarted","Data":"2e27dd7edf41b0a029166b45bb211bbc37a0dea72e3233c7178141b0bca77c81"}
Mar 14 05:48:11 crc kubenswrapper[4817]: I0314 05:48:11.631383 4817 generic.go:334] "Generic (PLEG): container finished" podID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerID="07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9" exitCode=0
Mar 14 05:48:11 crc kubenswrapper[4817]: I0314 05:48:11.631427 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qz" event={"ID":"b5616aab-7628-48e8-8659-fd6edf67ee27","Type":"ContainerDied","Data":"07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9"}
Mar 14 05:48:12 crc kubenswrapper[4817]: I0314 05:48:12.641360 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qz" event={"ID":"b5616aab-7628-48e8-8659-fd6edf67ee27","Type":"ContainerStarted","Data":"7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a"}
Mar 14 05:48:12 crc kubenswrapper[4817]: I0314 05:48:12.663520 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kz9qz" podStartSLOduration=5.052157955 podStartE2EDuration="7.663495783s" podCreationTimestamp="2026-03-14 05:48:05 +0000 UTC" firstStartedPulling="2026-03-14 05:48:09.61882972 +0000 UTC m=+943.657090466" lastFinishedPulling="2026-03-14 05:48:12.230167548 +0000 UTC m=+946.268428294" observedRunningTime="2026-03-14 05:48:12.661344092 +0000 UTC m=+946.699604848" watchObservedRunningTime="2026-03-14 05:48:12.663495783 +0000 UTC m=+946.701756529"
Mar 14 05:48:15 crc kubenswrapper[4817]: I0314 05:48:15.757359 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:15 crc kubenswrapper[4817]: I0314 05:48:15.757421 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:15 crc kubenswrapper[4817]: I0314 05:48:15.797423 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:22 crc kubenswrapper[4817]: I0314 05:48:22.206495 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5bf7c47ddb-srdqq"
Mar 14 05:48:25 crc kubenswrapper[4817]: I0314 05:48:25.794665 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.220798 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz9qz"]
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.221378 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kz9qz" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerName="registry-server" containerID="cri-o://7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a" gracePeriod=2
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.689140 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.702202 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-catalog-content\") pod \"b5616aab-7628-48e8-8659-fd6edf67ee27\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") "
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.702319 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p22s9\" (UniqueName: \"kubernetes.io/projected/b5616aab-7628-48e8-8659-fd6edf67ee27-kube-api-access-p22s9\") pod \"b5616aab-7628-48e8-8659-fd6edf67ee27\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") "
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.702341 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-utilities\") pod \"b5616aab-7628-48e8-8659-fd6edf67ee27\" (UID: \"b5616aab-7628-48e8-8659-fd6edf67ee27\") "
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.703607 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-utilities" (OuterVolumeSpecName: "utilities") pod "b5616aab-7628-48e8-8659-fd6edf67ee27" (UID: "b5616aab-7628-48e8-8659-fd6edf67ee27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.711109 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5616aab-7628-48e8-8659-fd6edf67ee27-kube-api-access-p22s9" (OuterVolumeSpecName: "kube-api-access-p22s9") pod "b5616aab-7628-48e8-8659-fd6edf67ee27" (UID: "b5616aab-7628-48e8-8659-fd6edf67ee27"). InnerVolumeSpecName "kube-api-access-p22s9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.787936 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5616aab-7628-48e8-8659-fd6edf67ee27" (UID: "b5616aab-7628-48e8-8659-fd6edf67ee27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.803373 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p22s9\" (UniqueName: \"kubernetes.io/projected/b5616aab-7628-48e8-8659-fd6edf67ee27-kube-api-access-p22s9\") on node \"crc\" DevicePath \"\""
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.803409 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.803420 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5616aab-7628-48e8-8659-fd6edf67ee27-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.924852 4817 generic.go:334] "Generic (PLEG): container finished" podID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerID="7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a" exitCode=0
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.924918 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qz" event={"ID":"b5616aab-7628-48e8-8659-fd6edf67ee27","Type":"ContainerDied","Data":"7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a"}
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.924938 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kz9qz"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.924986 4817 scope.go:117] "RemoveContainer" containerID="7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.924973 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kz9qz" event={"ID":"b5616aab-7628-48e8-8659-fd6edf67ee27","Type":"ContainerDied","Data":"2e27dd7edf41b0a029166b45bb211bbc37a0dea72e3233c7178141b0bca77c81"}
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.944012 4817 scope.go:117] "RemoveContainer" containerID="07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.955211 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kz9qz"]
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.959321 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kz9qz"]
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.961966 4817 scope.go:117] "RemoveContainer" containerID="29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.986019 4817 scope.go:117] "RemoveContainer" containerID="7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a"
Mar 14 05:48:28 crc kubenswrapper[4817]: E0314 05:48:28.986472 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a\": container with ID starting with 7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a not found: ID does not exist" containerID="7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.986529 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a"} err="failed to get container status \"7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a\": rpc error: code = NotFound desc = could not find container \"7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a\": container with ID starting with 7a1ba26ffebdc366673762bdd7096f9ef1b4a75b1fe627760bb121440024d75a not found: ID does not exist"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.986564 4817 scope.go:117] "RemoveContainer" containerID="07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9"
Mar 14 05:48:28 crc kubenswrapper[4817]: E0314 05:48:28.988139 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9\": container with ID starting with 07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9 not found: ID does not exist" containerID="07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.988167 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9"} err="failed to get container status \"07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9\": rpc error: code = NotFound desc = could not find container \"07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9\": container with ID starting with 07df8b4548b853299cd1ef2237b96289175012cc4b38e69b52da3579fc99b5c9 not found: ID does not exist"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.988183 4817 scope.go:117] "RemoveContainer" containerID="29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e"
Mar 14 05:48:28 crc kubenswrapper[4817]: E0314 05:48:28.988775 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e\": container with ID starting with 29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e not found: ID does not exist" containerID="29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e"
Mar 14 05:48:28 crc kubenswrapper[4817]: I0314 05:48:28.988817 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e"} err="failed to get container status \"29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e\": rpc error: code = NotFound desc = could not find container \"29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e\": container with ID starting with 29fe46e4cef3579d5c2c2a4b5c25e8d11abbef9177bf9fbc56ec428ce89a590e not found: ID does not exist"
Mar 14 05:48:30 crc kubenswrapper[4817]: I0314 05:48:30.739628 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" path="/var/lib/kubelet/pods/b5616aab-7628-48e8-8659-fd6edf67ee27/volumes"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.840006 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lf5xj"]
Mar 14 05:48:41 crc kubenswrapper[4817]: E0314 05:48:41.841212 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerName="registry-server"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.841230 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerName="registry-server"
Mar 14 05:48:41 crc kubenswrapper[4817]: E0314 05:48:41.841251 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerName="extract-content"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.841262 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerName="extract-content"
Mar 14 05:48:41 crc kubenswrapper[4817]: E0314 05:48:41.841330 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eda5581-2c9a-4a68-8bd5-e595b74b941d" containerName="oc"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.841339 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eda5581-2c9a-4a68-8bd5-e595b74b941d" containerName="oc"
Mar 14 05:48:41 crc kubenswrapper[4817]: E0314 05:48:41.841399 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerName="extract-utilities"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.841408 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerName="extract-utilities"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.841723 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eda5581-2c9a-4a68-8bd5-e595b74b941d" containerName="oc"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.841745 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5616aab-7628-48e8-8659-fd6edf67ee27" containerName="registry-server"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.845674 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lf5xj"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.878687 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf5xj"]
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.992319 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6q8v\" (UniqueName: \"kubernetes.io/projected/5807aefc-3eaf-4761-8888-da8fb0ff4fae-kube-api-access-r6q8v\") pod \"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.992462 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-catalog-content\") pod \"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj"
Mar 14 05:48:41 crc kubenswrapper[4817]: I0314 05:48:41.992574 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-utilities\") pod \"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj"
Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.075285 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh"]
Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.076658 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.080669 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-2rf6j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.089469 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.091018 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.093435 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4bxg5" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.094116 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6q8v\" (UniqueName: \"kubernetes.io/projected/5807aefc-3eaf-4761-8888-da8fb0ff4fae-kube-api-access-r6q8v\") pod \"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.094182 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-catalog-content\") pod \"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.094220 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-utilities\") pod 
\"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.095227 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-utilities\") pod \"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.095400 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-catalog-content\") pod \"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.099616 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.108108 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.121186 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.122227 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.126066 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hh2rx" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.128421 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.129481 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.130687 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6q8v\" (UniqueName: \"kubernetes.io/projected/5807aefc-3eaf-4761-8888-da8fb0ff4fae-kube-api-access-r6q8v\") pod \"redhat-marketplace-lf5xj\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.133178 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-v47hc" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.146916 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.154174 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.179516 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.195219 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbvf\" (UniqueName: \"kubernetes.io/projected/c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50-kube-api-access-jzbvf\") pod \"designate-operator-controller-manager-66d56f6ff4-c4p8n\" (UID: \"c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.195269 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrp54\" (UniqueName: \"kubernetes.io/projected/71a2aa8e-73f2-46c2-b8ad-2230259a3ede-kube-api-access-vrp54\") pod \"barbican-operator-controller-manager-d47688694-7cvhh\" (UID: \"71a2aa8e-73f2-46c2-b8ad-2230259a3ede\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.195302 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qnhg\" (UniqueName: \"kubernetes.io/projected/0d8de2cd-0cc8-40b5-a549-7632e38e11a9-kube-api-access-6qnhg\") pod \"glance-operator-controller-manager-5964f64c48-qrpsc\" (UID: \"0d8de2cd-0cc8-40b5-a549-7632e38e11a9\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.195328 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2v9h\" (UniqueName: \"kubernetes.io/projected/f1029de4-c046-47b8-820b-113369bf590a-kube-api-access-z2v9h\") pod \"cinder-operator-controller-manager-984cd4dcf-vsjwb\" (UID: \"f1029de4-c046-47b8-820b-113369bf590a\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" Mar 
14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.204085 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.205491 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.210227 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-t62nh" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.247572 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.260528 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.261859 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.265687 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.266178 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-cp662" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.267821 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.270101 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c2m9h" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.272373 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.288412 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.294210 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.297229 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzbvf\" (UniqueName: \"kubernetes.io/projected/c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50-kube-api-access-jzbvf\") pod \"designate-operator-controller-manager-66d56f6ff4-c4p8n\" (UID: \"c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.297287 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrp54\" (UniqueName: \"kubernetes.io/projected/71a2aa8e-73f2-46c2-b8ad-2230259a3ede-kube-api-access-vrp54\") pod \"barbican-operator-controller-manager-d47688694-7cvhh\" (UID: \"71a2aa8e-73f2-46c2-b8ad-2230259a3ede\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.297328 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qnhg\" (UniqueName: 
\"kubernetes.io/projected/0d8de2cd-0cc8-40b5-a549-7632e38e11a9-kube-api-access-6qnhg\") pod \"glance-operator-controller-manager-5964f64c48-qrpsc\" (UID: \"0d8de2cd-0cc8-40b5-a549-7632e38e11a9\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.297358 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2v9h\" (UniqueName: \"kubernetes.io/projected/f1029de4-c046-47b8-820b-113369bf590a-kube-api-access-z2v9h\") pod \"cinder-operator-controller-manager-984cd4dcf-vsjwb\" (UID: \"f1029de4-c046-47b8-820b-113369bf590a\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.297389 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6tm\" (UniqueName: \"kubernetes.io/projected/78bda46b-797b-4cc4-9cf5-14a2bc692947-kube-api-access-nk6tm\") pod \"heat-operator-controller-manager-77b6666d85-lssxv\" (UID: \"78bda46b-797b-4cc4-9cf5-14a2bc692947\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.301934 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.303024 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.315347 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-k7hg9" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.325867 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.326882 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.329385 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzbvf\" (UniqueName: \"kubernetes.io/projected/c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50-kube-api-access-jzbvf\") pod \"designate-operator-controller-manager-66d56f6ff4-c4p8n\" (UID: \"c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.330159 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2v9h\" (UniqueName: \"kubernetes.io/projected/f1029de4-c046-47b8-820b-113369bf590a-kube-api-access-z2v9h\") pod \"cinder-operator-controller-manager-984cd4dcf-vsjwb\" (UID: \"f1029de4-c046-47b8-820b-113369bf590a\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.331654 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-624tn" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.332221 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrp54\" 
(UniqueName: \"kubernetes.io/projected/71a2aa8e-73f2-46c2-b8ad-2230259a3ede-kube-api-access-vrp54\") pod \"barbican-operator-controller-manager-d47688694-7cvhh\" (UID: \"71a2aa8e-73f2-46c2-b8ad-2230259a3ede\") " pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.337623 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.350532 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qnhg\" (UniqueName: \"kubernetes.io/projected/0d8de2cd-0cc8-40b5-a549-7632e38e11a9-kube-api-access-6qnhg\") pod \"glance-operator-controller-manager-5964f64c48-qrpsc\" (UID: \"0d8de2cd-0cc8-40b5-a549-7632e38e11a9\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.355067 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.373938 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.374648 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.377241 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jd96c" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.399073 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgr7l\" (UniqueName: \"kubernetes.io/projected/2b1ddd08-cff6-4ec8-b701-77ad200ebd2f-kube-api-access-mgr7l\") pod \"horizon-operator-controller-manager-6d9d6b584d-9w9wj\" (UID: \"2b1ddd08-cff6-4ec8-b701-77ad200ebd2f\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.399129 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk6tm\" (UniqueName: \"kubernetes.io/projected/78bda46b-797b-4cc4-9cf5-14a2bc692947-kube-api-access-nk6tm\") pod \"heat-operator-controller-manager-77b6666d85-lssxv\" (UID: \"78bda46b-797b-4cc4-9cf5-14a2bc692947\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.399168 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.399202 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvkhl\" (UniqueName: \"kubernetes.io/projected/2d86f031-6e59-41cb-a7c1-cfe91c54630b-kube-api-access-tvkhl\") pod 
\"keystone-operator-controller-manager-684f77d66d-mxnmp\" (UID: \"2d86f031-6e59-41cb-a7c1-cfe91c54630b\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.399228 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2q9\" (UniqueName: \"kubernetes.io/projected/15bddcc1-f479-4390-96f1-f0fd2cd43578-kube-api-access-mv2q9\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.399249 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttjrl\" (UniqueName: \"kubernetes.io/projected/895e1039-8354-4cb0-85d0-a0b2cc112db6-kube-api-access-ttjrl\") pod \"ironic-operator-controller-manager-5bc894d9b-blx9z\" (UID: \"895e1039-8354-4cb0-85d0-a0b2cc112db6\") " pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.423042 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.423382 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk6tm\" (UniqueName: \"kubernetes.io/projected/78bda46b-797b-4cc4-9cf5-14a2bc692947-kube-api-access-nk6tm\") pod \"heat-operator-controller-manager-77b6666d85-lssxv\" (UID: \"78bda46b-797b-4cc4-9cf5-14a2bc692947\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.423925 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.425511 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.426181 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.431489 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-q9gzc" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.435668 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.436505 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.442007 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.446993 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-v8vfs" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.473597 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.517586 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.558335 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.566844 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttjrl\" (UniqueName: \"kubernetes.io/projected/895e1039-8354-4cb0-85d0-a0b2cc112db6-kube-api-access-ttjrl\") pod \"ironic-operator-controller-manager-5bc894d9b-blx9z\" (UID: \"895e1039-8354-4cb0-85d0-a0b2cc112db6\") " pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.567184 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgr7l\" (UniqueName: \"kubernetes.io/projected/2b1ddd08-cff6-4ec8-b701-77ad200ebd2f-kube-api-access-mgr7l\") pod \"horizon-operator-controller-manager-6d9d6b584d-9w9wj\" (UID: \"2b1ddd08-cff6-4ec8-b701-77ad200ebd2f\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.567248 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n76hj\" (UniqueName: \"kubernetes.io/projected/6dc5b773-7e09-4d0c-b7fb-e73a398784dd-kube-api-access-n76hj\") pod \"manila-operator-controller-manager-6b74cf5dc5-4tws4\" (UID: \"6dc5b773-7e09-4d0c-b7fb-e73a398784dd\") " pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.567395 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert\") pod 
\"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.567491 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dm7f\" (UniqueName: \"kubernetes.io/projected/4e484bfe-93ce-49cb-b687-2fd92d4a8b60-kube-api-access-7dm7f\") pod \"neutron-operator-controller-manager-776c5696bf-gzl7j\" (UID: \"4e484bfe-93ce-49cb-b687-2fd92d4a8b60\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.567532 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvkhl\" (UniqueName: \"kubernetes.io/projected/2d86f031-6e59-41cb-a7c1-cfe91c54630b-kube-api-access-tvkhl\") pod \"keystone-operator-controller-manager-684f77d66d-mxnmp\" (UID: \"2d86f031-6e59-41cb-a7c1-cfe91c54630b\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.567570 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2d2\" (UniqueName: \"kubernetes.io/projected/3d2cc81d-8675-4b17-a429-c0a29be998d9-kube-api-access-wb2d2\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf\" (UID: \"3d2cc81d-8675-4b17-a429-c0a29be998d9\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.567630 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2q9\" (UniqueName: \"kubernetes.io/projected/15bddcc1-f479-4390-96f1-f0fd2cd43578-kube-api-access-mv2q9\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " 
pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:42 crc kubenswrapper[4817]: E0314 05:48:42.568340 4817 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:42 crc kubenswrapper[4817]: E0314 05:48:42.568404 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert podName:15bddcc1-f479-4390-96f1-f0fd2cd43578 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:43.06838713 +0000 UTC m=+977.106647876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert") pod "infra-operator-controller-manager-54dc5b8f8d-sbktt" (UID: "15bddcc1-f479-4390-96f1-f0fd2cd43578") : secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.570674 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.593234 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.602777 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvkhl\" (UniqueName: \"kubernetes.io/projected/2d86f031-6e59-41cb-a7c1-cfe91c54630b-kube-api-access-tvkhl\") pod \"keystone-operator-controller-manager-684f77d66d-mxnmp\" (UID: \"2d86f031-6e59-41cb-a7c1-cfe91c54630b\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.604884 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.605413 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgr7l\" (UniqueName: \"kubernetes.io/projected/2b1ddd08-cff6-4ec8-b701-77ad200ebd2f-kube-api-access-mgr7l\") pod \"horizon-operator-controller-manager-6d9d6b584d-9w9wj\" (UID: \"2b1ddd08-cff6-4ec8-b701-77ad200ebd2f\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.606476 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.607250 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2q9\" (UniqueName: \"kubernetes.io/projected/15bddcc1-f479-4390-96f1-f0fd2cd43578-kube-api-access-mv2q9\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.611793 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttjrl\" (UniqueName: \"kubernetes.io/projected/895e1039-8354-4cb0-85d0-a0b2cc112db6-kube-api-access-ttjrl\") pod \"ironic-operator-controller-manager-5bc894d9b-blx9z\" (UID: \"895e1039-8354-4cb0-85d0-a0b2cc112db6\") " pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.613340 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-shncp" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.617311 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.618662 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.622766 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w9788" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.623885 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.637950 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.678803 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dm7f\" (UniqueName: \"kubernetes.io/projected/4e484bfe-93ce-49cb-b687-2fd92d4a8b60-kube-api-access-7dm7f\") pod \"neutron-operator-controller-manager-776c5696bf-gzl7j\" (UID: \"4e484bfe-93ce-49cb-b687-2fd92d4a8b60\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.678864 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2d2\" (UniqueName: \"kubernetes.io/projected/3d2cc81d-8675-4b17-a429-c0a29be998d9-kube-api-access-wb2d2\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf\" (UID: \"3d2cc81d-8675-4b17-a429-c0a29be998d9\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.678954 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n76hj\" (UniqueName: \"kubernetes.io/projected/6dc5b773-7e09-4d0c-b7fb-e73a398784dd-kube-api-access-n76hj\") pod \"manila-operator-controller-manager-6b74cf5dc5-4tws4\" (UID: \"6dc5b773-7e09-4d0c-b7fb-e73a398784dd\") " pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.694486 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.700475 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.705933 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.706772 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n76hj\" (UniqueName: \"kubernetes.io/projected/6dc5b773-7e09-4d0c-b7fb-e73a398784dd-kube-api-access-n76hj\") pod \"manila-operator-controller-manager-6b74cf5dc5-4tws4\" (UID: \"6dc5b773-7e09-4d0c-b7fb-e73a398784dd\") " pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.708442 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.709727 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.710881 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2d2\" (UniqueName: \"kubernetes.io/projected/3d2cc81d-8675-4b17-a429-c0a29be998d9-kube-api-access-wb2d2\") pod \"mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf\" (UID: \"3d2cc81d-8675-4b17-a429-c0a29be998d9\") " pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.716812 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.717425 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dvc5w" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.717828 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.726152 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dm7f\" (UniqueName: \"kubernetes.io/projected/4e484bfe-93ce-49cb-b687-2fd92d4a8b60-kube-api-access-7dm7f\") pod \"neutron-operator-controller-manager-776c5696bf-gzl7j\" (UID: \"4e484bfe-93ce-49cb-b687-2fd92d4a8b60\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.730529 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.731410 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.738477 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2r2fp" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.750593 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.763133 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.783498 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.789061 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.790340 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.792938 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-t6vlk" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.796252 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4t2l\" (UniqueName: \"kubernetes.io/projected/7435e15d-0e12-4192-9725-59c501707754-kube-api-access-b4t2l\") pod \"octavia-operator-controller-manager-5f4f55cb5c-xvl5j\" (UID: \"7435e15d-0e12-4192-9725-59c501707754\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.796367 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.796410 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjrff\" (UniqueName: \"kubernetes.io/projected/12489bc5-14ae-42cb-9717-341e479b9e53-kube-api-access-vjrff\") pod \"ovn-operator-controller-manager-bbc5b68f9-75w5q\" (UID: \"12489bc5-14ae-42cb-9717-341e479b9e53\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.796452 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg428\" (UniqueName: \"kubernetes.io/projected/caf0784a-accc-4973-8daf-7239f91eacb3-kube-api-access-gg428\") pod \"nova-operator-controller-manager-7f84474648-kzrj9\" (UID: \"caf0784a-accc-4973-8daf-7239f91eacb3\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.796567 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztcjj\" (UniqueName: \"kubernetes.io/projected/2c529641-9582-4a57-b54d-f1f733f21a89-kube-api-access-ztcjj\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.799127 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.800275 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.811831 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8gb5k" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.828364 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.829222 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.838379 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-px8zs" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.852915 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.897647 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4t2l\" (UniqueName: \"kubernetes.io/projected/7435e15d-0e12-4192-9725-59c501707754-kube-api-access-b4t2l\") pod \"octavia-operator-controller-manager-5f4f55cb5c-xvl5j\" (UID: \"7435e15d-0e12-4192-9725-59c501707754\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.897706 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.897740 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjrff\" (UniqueName: \"kubernetes.io/projected/12489bc5-14ae-42cb-9717-341e479b9e53-kube-api-access-vjrff\") pod \"ovn-operator-controller-manager-bbc5b68f9-75w5q\" (UID: \"12489bc5-14ae-42cb-9717-341e479b9e53\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.897770 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg428\" (UniqueName: \"kubernetes.io/projected/caf0784a-accc-4973-8daf-7239f91eacb3-kube-api-access-gg428\") pod \"nova-operator-controller-manager-7f84474648-kzrj9\" (UID: \"caf0784a-accc-4973-8daf-7239f91eacb3\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.897801 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj65b\" (UniqueName: \"kubernetes.io/projected/fa24c106-dfe2-4250-9b00-b063f21f0dcd-kube-api-access-wj65b\") pod \"telemetry-operator-controller-manager-6854b8b9d9-cr7p6\" (UID: \"fa24c106-dfe2-4250-9b00-b063f21f0dcd\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.897829 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhzhc\" (UniqueName: \"kubernetes.io/projected/5b65131d-d231-46ae-b5f9-95c9e4a0d69a-kube-api-access-rhzhc\") pod \"placement-operator-controller-manager-574d45c66c-hs4lc\" (UID: \"5b65131d-d231-46ae-b5f9-95c9e4a0d69a\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" Mar 14 05:48:42 crc 
kubenswrapper[4817]: I0314 05:48:42.897878 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztcjj\" (UniqueName: \"kubernetes.io/projected/2c529641-9582-4a57-b54d-f1f733f21a89-kube-api-access-ztcjj\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.897928 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4k65\" (UniqueName: \"kubernetes.io/projected/be9a0979-c4ed-471b-9cc9-c3dd753f106d-kube-api-access-w4k65\") pod \"swift-operator-controller-manager-7f9cc5dd44-nb94v\" (UID: \"be9a0979-c4ed-471b-9cc9-c3dd753f106d\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" Mar 14 05:48:42 crc kubenswrapper[4817]: E0314 05:48:42.898419 4817 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:42 crc kubenswrapper[4817]: E0314 05:48:42.898476 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert podName:2c529641-9582-4a57-b54d-f1f733f21a89 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:43.398459072 +0000 UTC m=+977.436719818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" (UID: "2c529641-9582-4a57-b54d-f1f733f21a89") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.902785 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.906587 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.931263 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4t2l\" (UniqueName: \"kubernetes.io/projected/7435e15d-0e12-4192-9725-59c501707754-kube-api-access-b4t2l\") pod \"octavia-operator-controller-manager-5f4f55cb5c-xvl5j\" (UID: \"7435e15d-0e12-4192-9725-59c501707754\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.931993 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg428\" (UniqueName: \"kubernetes.io/projected/caf0784a-accc-4973-8daf-7239f91eacb3-kube-api-access-gg428\") pod \"nova-operator-controller-manager-7f84474648-kzrj9\" (UID: \"caf0784a-accc-4973-8daf-7239f91eacb3\") " pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.937954 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.938333 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6"] Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.941629 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjrff\" (UniqueName: \"kubernetes.io/projected/12489bc5-14ae-42cb-9717-341e479b9e53-kube-api-access-vjrff\") pod \"ovn-operator-controller-manager-bbc5b68f9-75w5q\" (UID: \"12489bc5-14ae-42cb-9717-341e479b9e53\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.944056 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztcjj\" (UniqueName: \"kubernetes.io/projected/2c529641-9582-4a57-b54d-f1f733f21a89-kube-api-access-ztcjj\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.946033 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" Mar 14 05:48:42 crc kubenswrapper[4817]: I0314 05:48:42.962992 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:42.999563 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj65b\" (UniqueName: \"kubernetes.io/projected/fa24c106-dfe2-4250-9b00-b063f21f0dcd-kube-api-access-wj65b\") pod \"telemetry-operator-controller-manager-6854b8b9d9-cr7p6\" (UID: \"fa24c106-dfe2-4250-9b00-b063f21f0dcd\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:42.999601 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhzhc\" (UniqueName: \"kubernetes.io/projected/5b65131d-d231-46ae-b5f9-95c9e4a0d69a-kube-api-access-rhzhc\") pod \"placement-operator-controller-manager-574d45c66c-hs4lc\" (UID: \"5b65131d-d231-46ae-b5f9-95c9e4a0d69a\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:42.999637 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4k65\" (UniqueName: \"kubernetes.io/projected/be9a0979-c4ed-471b-9cc9-c3dd753f106d-kube-api-access-w4k65\") pod \"swift-operator-controller-manager-7f9cc5dd44-nb94v\" (UID: \"be9a0979-c4ed-471b-9cc9-c3dd753f106d\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.001534 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.004533 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xc7zz" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.025398 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhzhc\" (UniqueName: \"kubernetes.io/projected/5b65131d-d231-46ae-b5f9-95c9e4a0d69a-kube-api-access-rhzhc\") pod \"placement-operator-controller-manager-574d45c66c-hs4lc\" (UID: \"5b65131d-d231-46ae-b5f9-95c9e4a0d69a\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.026366 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj65b\" (UniqueName: \"kubernetes.io/projected/fa24c106-dfe2-4250-9b00-b063f21f0dcd-kube-api-access-wj65b\") pod \"telemetry-operator-controller-manager-6854b8b9d9-cr7p6\" (UID: \"fa24c106-dfe2-4250-9b00-b063f21f0dcd\") " pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.038238 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4k65\" (UniqueName: \"kubernetes.io/projected/be9a0979-c4ed-471b-9cc9-c3dd753f106d-kube-api-access-w4k65\") pod \"swift-operator-controller-manager-7f9cc5dd44-nb94v\" (UID: \"be9a0979-c4ed-471b-9cc9-c3dd753f106d\") " pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.045508 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.095888 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.101044 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8kml\" (UniqueName: \"kubernetes.io/projected/762056c0-1243-4e41-87ea-242c1d082965-kube-api-access-v8kml\") pod \"test-operator-controller-manager-5c5cb9c4d7-9j88f\" (UID: \"762056c0-1243-4e41-87ea-242c1d082965\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.101164 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.101318 4817 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.101373 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert podName:15bddcc1-f479-4390-96f1-f0fd2cd43578 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:44.101359199 +0000 UTC m=+978.139619935 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert") pod "infra-operator-controller-manager-54dc5b8f8d-sbktt" (UID: "15bddcc1-f479-4390-96f1-f0fd2cd43578") : secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.118379 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.120791 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.128263 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6pqsv" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.137345 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.146829 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.157397 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.180968 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.181365 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.181905 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.201683 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.202412 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.203364 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.203509 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-n27c6" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.203944 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.204072 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bcvdx\" (UniqueName: \"kubernetes.io/projected/d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c-kube-api-access-bcvdx\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xrd4b\" (UID: \"d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.204095 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8kml\" (UniqueName: \"kubernetes.io/projected/762056c0-1243-4e41-87ea-242c1d082965-kube-api-access-v8kml\") pod \"test-operator-controller-manager-5c5cb9c4d7-9j88f\" (UID: \"762056c0-1243-4e41-87ea-242c1d082965\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.204175 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrxc\" (UniqueName: \"kubernetes.io/projected/1351dc38-2b39-4e57-869d-1b430e900250-kube-api-access-ckrxc\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.228872 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.257906 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8kml\" (UniqueName: \"kubernetes.io/projected/762056c0-1243-4e41-87ea-242c1d082965-kube-api-access-v8kml\") pod \"test-operator-controller-manager-5c5cb9c4d7-9j88f\" (UID: \"762056c0-1243-4e41-87ea-242c1d082965\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.306423 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bcvdx\" (UniqueName: \"kubernetes.io/projected/d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c-kube-api-access-bcvdx\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xrd4b\" (UID: \"d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.306502 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckrxc\" (UniqueName: \"kubernetes.io/projected/1351dc38-2b39-4e57-869d-1b430e900250-kube-api-access-ckrxc\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.306540 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.306573 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.306761 4817 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.306813 4817 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:43.806797728 +0000 UTC m=+977.845058474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "webhook-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.307289 4817 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.307380 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:43.807359064 +0000 UTC m=+977.845619810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "metrics-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.319029 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.320092 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.321336 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.323491 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-ljscr" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.327214 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.337977 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf5xj"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.344545 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcvdx\" (UniqueName: \"kubernetes.io/projected/d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c-kube-api-access-bcvdx\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xrd4b\" (UID: \"d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.349550 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckrxc\" (UniqueName: \"kubernetes.io/projected/1351dc38-2b39-4e57-869d-1b430e900250-kube-api-access-ckrxc\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.398046 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.406060 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh"] Mar 14 
05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.407721 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.407755 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvgwh\" (UniqueName: \"kubernetes.io/projected/9d193754-974c-4c1a-a142-28fc5f109935-kube-api-access-zvgwh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6dknb\" (UID: \"9d193754-974c-4c1a-a142-28fc5f109935\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.408977 4817 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.409027 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert podName:2c529641-9582-4a57-b54d-f1f733f21a89 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:44.409008153 +0000 UTC m=+978.447268899 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" (UID: "2c529641-9582-4a57-b54d-f1f733f21a89") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.422267 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n"] Mar 14 05:48:43 crc kubenswrapper[4817]: W0314 05:48:43.437143 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4dbd02a_f20d_4b34_93f7_cd8f7eb0ca50.slice/crio-f665d595983dfcf9746f2760cdbb24f45eeb6af30cef1657ba856e23939c8be2 WatchSource:0}: Error finding container f665d595983dfcf9746f2760cdbb24f45eeb6af30cef1657ba856e23939c8be2: Status 404 returned error can't find the container with id f665d595983dfcf9746f2760cdbb24f45eeb6af30cef1657ba856e23939c8be2 Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.509960 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvgwh\" (UniqueName: \"kubernetes.io/projected/9d193754-974c-4c1a-a142-28fc5f109935-kube-api-access-zvgwh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6dknb\" (UID: \"9d193754-974c-4c1a-a142-28fc5f109935\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.521269 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.547849 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvgwh\" (UniqueName: \"kubernetes.io/projected/9d193754-974c-4c1a-a142-28fc5f109935-kube-api-access-zvgwh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6dknb\" (UID: \"9d193754-974c-4c1a-a142-28fc5f109935\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.605479 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.639087 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.651690 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.667434 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.815760 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.815938 4817 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.816006 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:44.815979571 +0000 UTC m=+978.854240367 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "webhook-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.816287 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.816696 4817 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: E0314 05:48:43.816728 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:44.816717712 +0000 UTC m=+978.854978458 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "metrics-server-cert" not found Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.821849 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.835316 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp"] Mar 14 05:48:43 crc kubenswrapper[4817]: W0314 05:48:43.930187 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod895e1039_8354_4cb0_85d0_a0b2cc112db6.slice/crio-a0a9862c8e2b8b2f3895ac464c15ab98bca0a298076a095f3e9a0aef645e6c3e WatchSource:0}: Error finding container a0a9862c8e2b8b2f3895ac464c15ab98bca0a298076a095f3e9a0aef645e6c3e: Status 404 returned error can't find the container with id a0a9862c8e2b8b2f3895ac464c15ab98bca0a298076a095f3e9a0aef645e6c3e Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.980222 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j"] Mar 14 05:48:43 crc kubenswrapper[4817]: I0314 05:48:43.999687 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4"] Mar 14 05:48:44 crc kubenswrapper[4817]: W0314 05:48:44.010966 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e484bfe_93ce_49cb_b687_2fd92d4a8b60.slice/crio-fbab08d9866f2ab874c47671a2416297a1b4ee40684b93d6081b57fe4b965233 WatchSource:0}: Error finding container 
fbab08d9866f2ab874c47671a2416297a1b4ee40684b93d6081b57fe4b965233: Status 404 returned error can't find the container with id fbab08d9866f2ab874c47671a2416297a1b4ee40684b93d6081b57fe4b965233 Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.121325 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.121545 4817 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.121854 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert podName:15bddcc1-f479-4390-96f1-f0fd2cd43578 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:46.121830575 +0000 UTC m=+980.160091371 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert") pod "infra-operator-controller-manager-54dc5b8f8d-sbktt" (UID: "15bddcc1-f479-4390-96f1-f0fd2cd43578") : secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.149798 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" event={"ID":"f1029de4-c046-47b8-820b-113369bf590a","Type":"ContainerStarted","Data":"37b0b359e883f4c3c25980f042d2317c78c743b86a522a8fa3a35bb26ff4ff74"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.152458 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" event={"ID":"895e1039-8354-4cb0-85d0-a0b2cc112db6","Type":"ContainerStarted","Data":"a0a9862c8e2b8b2f3895ac464c15ab98bca0a298076a095f3e9a0aef645e6c3e"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.157975 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" event={"ID":"78bda46b-797b-4cc4-9cf5-14a2bc692947","Type":"ContainerStarted","Data":"59be62411dd2907d92b9d013a21f559533e30897b600e04ca87734a8077eeecf"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.159944 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" event={"ID":"6dc5b773-7e09-4d0c-b7fb-e73a398784dd","Type":"ContainerStarted","Data":"277cf4ed787d3d3ab5ec762ae00aeb15ca510acb15294a10ba0a076633e6c5c3"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.160959 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" 
event={"ID":"0d8de2cd-0cc8-40b5-a549-7632e38e11a9","Type":"ContainerStarted","Data":"e7710b44d3e9caf340384af1e9fd0aeb1ec3bbb35d2f0f1f6e4a65c159589e8b"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.164016 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" event={"ID":"4e484bfe-93ce-49cb-b687-2fd92d4a8b60","Type":"ContainerStarted","Data":"fbab08d9866f2ab874c47671a2416297a1b4ee40684b93d6081b57fe4b965233"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.176349 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf"] Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.190168 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" event={"ID":"2b1ddd08-cff6-4ec8-b701-77ad200ebd2f","Type":"ContainerStarted","Data":"33875407f825eb34d326ac422433080376225b3d137b8f0f9f49a82f92223985"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.198877 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" event={"ID":"c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50","Type":"ContainerStarted","Data":"f665d595983dfcf9746f2760cdbb24f45eeb6af30cef1657ba856e23939c8be2"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.203780 4817 generic.go:334] "Generic (PLEG): container finished" podID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerID="af37f05811b23ce2ff6f5c0345a823544bd538145559a5d91d8f9f8fce11fb8a" exitCode=0 Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.203838 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf5xj" event={"ID":"5807aefc-3eaf-4761-8888-da8fb0ff4fae","Type":"ContainerDied","Data":"af37f05811b23ce2ff6f5c0345a823544bd538145559a5d91d8f9f8fce11fb8a"} Mar 14 05:48:44 crc 
kubenswrapper[4817]: I0314 05:48:44.203866 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf5xj" event={"ID":"5807aefc-3eaf-4761-8888-da8fb0ff4fae","Type":"ContainerStarted","Data":"4c2b54476c20da2ce9d31e8bbc02c79749359ed79ff6cb1f2ce317d41ee148d1"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.205691 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" event={"ID":"71a2aa8e-73f2-46c2-b8ad-2230259a3ede","Type":"ContainerStarted","Data":"acf3505a4b01da1952e65493e56d26a2190a66875ac104ceacde35a558e1978f"} Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.208175 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j"] Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.209716 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" event={"ID":"2d86f031-6e59-41cb-a7c1-cfe91c54630b","Type":"ContainerStarted","Data":"15590d75ccc894ad10b0f3922c433186ac176421871633bdf93fd41b241ebbe7"} Mar 14 05:48:44 crc kubenswrapper[4817]: W0314 05:48:44.237605 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7435e15d_0e12_4192_9725_59c501707754.slice/crio-3cb9400ec4881c3fb128128464fde80532978ce8ecb17e97dadcbda3a1a205ca WatchSource:0}: Error finding container 3cb9400ec4881c3fb128128464fde80532978ce8ecb17e97dadcbda3a1a205ca: Status 404 returned error can't find the container with id 3cb9400ec4881c3fb128128464fde80532978ce8ecb17e97dadcbda3a1a205ca Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.327046 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q"] Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.337596 4817 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9"] Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.343757 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc"] Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.363769 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjrff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-75w5q_openstack-operators(12489bc5-14ae-42cb-9717-341e479b9e53): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.366078 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" podUID="12489bc5-14ae-42cb-9717-341e479b9e53" Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.425678 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.425825 4817 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 
05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.426017 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert podName:2c529641-9582-4a57-b54d-f1f733f21a89 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:46.4259956 +0000 UTC m=+980.464256336 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" (UID: "2c529641-9582-4a57-b54d-f1f733f21a89") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.484406 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6"] Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.500135 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v"] Mar 14 05:48:44 crc kubenswrapper[4817]: W0314 05:48:44.503074 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa24c106_dfe2_4250_9b00_b063f21f0dcd.slice/crio-73563bb229dd36f9a45984d093958765b55f641404b3520c5448b2784dd4e313 WatchSource:0}: Error finding container 73563bb229dd36f9a45984d093958765b55f641404b3520c5448b2784dd4e313: Status 404 returned error can't find the container with id 73563bb229dd36f9a45984d093958765b55f641404b3520c5448b2784dd4e313 Mar 14 05:48:44 crc kubenswrapper[4817]: W0314 05:48:44.511360 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe9a0979_c4ed_471b_9cc9_c3dd753f106d.slice/crio-5e0d24e12c3e394b4487e99093f1fc5eee982527df7e3e4589d00833d2ca7875 WatchSource:0}: Error finding container 
5e0d24e12c3e394b4487e99093f1fc5eee982527df7e3e4589d00833d2ca7875: Status 404 returned error can't find the container with id 5e0d24e12c3e394b4487e99093f1fc5eee982527df7e3e4589d00833d2ca7875 Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.516120 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72db77c98e7bca64d469b4dc316e9c8d329681f825d19ef8f333437fb1c6d3f5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4k65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7f9cc5dd44-nb94v_openstack-operators(be9a0979-c4ed-471b-9cc9-c3dd753f106d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.517768 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" podUID="be9a0979-c4ed-471b-9cc9-c3dd753f106d" Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.606884 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f"] Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.613986 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb"] Mar 14 05:48:44 crc kubenswrapper[4817]: W0314 05:48:44.618437 4817 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d193754_974c_4c1a_a142_28fc5f109935.slice/crio-d6c1c4bd3e5054c0b24f20eb7d475fe2701356a28ee8cbc218bc4ac99dc94ee6 WatchSource:0}: Error finding container d6c1c4bd3e5054c0b24f20eb7d475fe2701356a28ee8cbc218bc4ac99dc94ee6: Status 404 returned error can't find the container with id d6c1c4bd3e5054c0b24f20eb7d475fe2701356a28ee8cbc218bc4ac99dc94ee6 Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.619293 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b"] Mar 14 05:48:44 crc kubenswrapper[4817]: W0314 05:48:44.620401 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0f348f1_96ec_46cd_bdd4_cccc5ad93f1c.slice/crio-4c4e8556dc252d3961a6bd595533db62d7f843f0e6797381394b3a6004897bba WatchSource:0}: Error finding container 4c4e8556dc252d3961a6bd595533db62d7f843f0e6797381394b3a6004897bba: Status 404 returned error can't find the container with id 4c4e8556dc252d3961a6bd595533db62d7f843f0e6797381394b3a6004897bba Mar 14 05:48:44 crc kubenswrapper[4817]: W0314 05:48:44.622252 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod762056c0_1243_4e41_87ea_242c1d082965.slice/crio-75b6192453aca7aff7ff22296e089124cf8152b8ef7c409292170af4476cfbf2 WatchSource:0}: Error finding container 75b6192453aca7aff7ff22296e089124cf8152b8ef7c409292170af4476cfbf2: Status 404 returned error can't find the container with id 75b6192453aca7aff7ff22296e089124cf8152b8ef7c409292170af4476cfbf2 Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.622554 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvgwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6dknb_openstack-operators(9d193754-974c-4c1a-a142-28fc5f109935): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.624186 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcvdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6c4d75f7f9-xrd4b_openstack-operators(d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.624956 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" podUID="9d193754-974c-4c1a-a142-28fc5f109935" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.625579 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" podUID="d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.625597 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8kml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-9j88f_openstack-operators(762056c0-1243-4e41-87ea-242c1d082965): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.627385 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" podUID="762056c0-1243-4e41-87ea-242c1d082965" Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.836479 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:44 crc kubenswrapper[4817]: I0314 05:48:44.836537 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.836667 4817 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.836721 4817 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.836742 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:46.836725424 +0000 UTC m=+980.874986170 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "metrics-server-cert" not found Mar 14 05:48:44 crc kubenswrapper[4817]: E0314 05:48:44.836789 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:46.836776175 +0000 UTC m=+980.875037001 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "webhook-server-cert" not found Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.221494 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" event={"ID":"7435e15d-0e12-4192-9725-59c501707754","Type":"ContainerStarted","Data":"3cb9400ec4881c3fb128128464fde80532978ce8ecb17e97dadcbda3a1a205ca"} Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.226586 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" event={"ID":"fa24c106-dfe2-4250-9b00-b063f21f0dcd","Type":"ContainerStarted","Data":"73563bb229dd36f9a45984d093958765b55f641404b3520c5448b2784dd4e313"} Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.228593 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" event={"ID":"caf0784a-accc-4973-8daf-7239f91eacb3","Type":"ContainerStarted","Data":"f34a558338c35a3b2e227730fe9fe5c6cfdb59d4f5862df356ab69c3eeaa6f91"} Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.233696 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" event={"ID":"be9a0979-c4ed-471b-9cc9-c3dd753f106d","Type":"ContainerStarted","Data":"5e0d24e12c3e394b4487e99093f1fc5eee982527df7e3e4589d00833d2ca7875"} Mar 14 05:48:45 crc kubenswrapper[4817]: E0314 05:48:45.234999 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72db77c98e7bca64d469b4dc316e9c8d329681f825d19ef8f333437fb1c6d3f5\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" podUID="be9a0979-c4ed-471b-9cc9-c3dd753f106d" Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.240687 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" event={"ID":"3d2cc81d-8675-4b17-a429-c0a29be998d9","Type":"ContainerStarted","Data":"290974fcf93480cdaa891805fbfb935b96683b58b5048445adf9e4aa2f8f8862"} Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.257012 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" event={"ID":"762056c0-1243-4e41-87ea-242c1d082965","Type":"ContainerStarted","Data":"75b6192453aca7aff7ff22296e089124cf8152b8ef7c409292170af4476cfbf2"} Mar 14 05:48:45 crc kubenswrapper[4817]: E0314 05:48:45.258507 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" podUID="762056c0-1243-4e41-87ea-242c1d082965" Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.263409 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" event={"ID":"5b65131d-d231-46ae-b5f9-95c9e4a0d69a","Type":"ContainerStarted","Data":"f3a89ce083641d9cc86cf7d352725e3bfa80407a0e270ddc3bdd25f2da150af0"} Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.264430 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" 
event={"ID":"9d193754-974c-4c1a-a142-28fc5f109935","Type":"ContainerStarted","Data":"d6c1c4bd3e5054c0b24f20eb7d475fe2701356a28ee8cbc218bc4ac99dc94ee6"} Mar 14 05:48:45 crc kubenswrapper[4817]: E0314 05:48:45.265682 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" podUID="9d193754-974c-4c1a-a142-28fc5f109935" Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.266390 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" event={"ID":"12489bc5-14ae-42cb-9717-341e479b9e53","Type":"ContainerStarted","Data":"cb085714964b9516db0bd2361938a1e095b1de793190855edb81f593ee97e80a"} Mar 14 05:48:45 crc kubenswrapper[4817]: E0314 05:48:45.267376 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" podUID="12489bc5-14ae-42cb-9717-341e479b9e53" Mar 14 05:48:45 crc kubenswrapper[4817]: I0314 05:48:45.273162 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" event={"ID":"d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c","Type":"ContainerStarted","Data":"4c4e8556dc252d3961a6bd595533db62d7f843f0e6797381394b3a6004897bba"} Mar 14 05:48:45 crc kubenswrapper[4817]: E0314 05:48:45.279435 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" podUID="d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c" Mar 14 05:48:46 crc kubenswrapper[4817]: I0314 05:48:46.166582 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.166793 4817 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.167024 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert podName:15bddcc1-f479-4390-96f1-f0fd2cd43578 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:50.167008185 +0000 UTC m=+984.205268931 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert") pod "infra-operator-controller-manager-54dc5b8f8d-sbktt" (UID: "15bddcc1-f479-4390-96f1-f0fd2cd43578") : secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:46 crc kubenswrapper[4817]: I0314 05:48:46.298054 4817 generic.go:334] "Generic (PLEG): container finished" podID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerID="88473530c05c1b57e848574bc93692d59d67ff066b3db42e23b72b3a3b35e34c" exitCode=0 Mar 14 05:48:46 crc kubenswrapper[4817]: I0314 05:48:46.298097 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf5xj" event={"ID":"5807aefc-3eaf-4761-8888-da8fb0ff4fae","Type":"ContainerDied","Data":"88473530c05c1b57e848574bc93692d59d67ff066b3db42e23b72b3a3b35e34c"} Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.300511 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72db77c98e7bca64d469b4dc316e9c8d329681f825d19ef8f333437fb1c6d3f5\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" podUID="be9a0979-c4ed-471b-9cc9-c3dd753f106d" Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.300535 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" podUID="762056c0-1243-4e41-87ea-242c1d082965" Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.300571 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" podUID="12489bc5-14ae-42cb-9717-341e479b9e53" Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.302221 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" podUID="d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c" Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.302249 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" podUID="9d193754-974c-4c1a-a142-28fc5f109935" Mar 14 05:48:46 crc kubenswrapper[4817]: I0314 05:48:46.472748 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.472945 4817 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.473036 4817 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert podName:2c529641-9582-4a57-b54d-f1f733f21a89 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:50.473017733 +0000 UTC m=+984.511278479 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" (UID: "2c529641-9582-4a57-b54d-f1f733f21a89") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:46 crc kubenswrapper[4817]: I0314 05:48:46.892824 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:46 crc kubenswrapper[4817]: I0314 05:48:46.892937 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.893296 4817 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.893361 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. 
No retries permitted until 2026-03-14 05:48:50.89334265 +0000 UTC m=+984.931603396 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "webhook-server-cert" not found Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.893785 4817 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 05:48:46 crc kubenswrapper[4817]: E0314 05:48:46.893822 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:50.893813053 +0000 UTC m=+984.932073799 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "metrics-server-cert" not found Mar 14 05:48:50 crc kubenswrapper[4817]: I0314 05:48:50.244738 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:50 crc kubenswrapper[4817]: E0314 05:48:50.244943 4817 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:50 crc kubenswrapper[4817]: E0314 05:48:50.245196 4817 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert podName:15bddcc1-f479-4390-96f1-f0fd2cd43578 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:58.24517723 +0000 UTC m=+992.283437976 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert") pod "infra-operator-controller-manager-54dc5b8f8d-sbktt" (UID: "15bddcc1-f479-4390-96f1-f0fd2cd43578") : secret "infra-operator-webhook-server-cert" not found Mar 14 05:48:50 crc kubenswrapper[4817]: I0314 05:48:50.549429 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:50 crc kubenswrapper[4817]: E0314 05:48:50.549597 4817 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:50 crc kubenswrapper[4817]: E0314 05:48:50.549687 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert podName:2c529641-9582-4a57-b54d-f1f733f21a89 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:58.549668785 +0000 UTC m=+992.587929531 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" (UID: "2c529641-9582-4a57-b54d-f1f733f21a89") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 05:48:51 crc kubenswrapper[4817]: I0314 05:48:51.505369 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:51 crc kubenswrapper[4817]: I0314 05:48:51.505808 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:51 crc kubenswrapper[4817]: E0314 05:48:51.506544 4817 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 05:48:51 crc kubenswrapper[4817]: E0314 05:48:51.506631 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:59.506606945 +0000 UTC m=+993.544867691 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "metrics-server-cert" not found Mar 14 05:48:51 crc kubenswrapper[4817]: E0314 05:48:51.507256 4817 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 05:48:51 crc kubenswrapper[4817]: E0314 05:48:51.507519 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs podName:1351dc38-2b39-4e57-869d-1b430e900250 nodeName:}" failed. No retries permitted until 2026-03-14 05:48:59.50749888 +0000 UTC m=+993.545759626 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs") pod "openstack-operator-controller-manager-5bb879dbb8-vlsfx" (UID: "1351dc38-2b39-4e57-869d-1b430e900250") : secret "webhook-server-cert" not found Mar 14 05:48:54 crc kubenswrapper[4817]: I0314 05:48:54.785626 4817 scope.go:117] "RemoveContainer" containerID="9067522da97e5affd5a1eb706f5b73f0e8ce8293d7c314df4a432969b939d4ab" Mar 14 05:48:58 crc kubenswrapper[4817]: I0314 05:48:58.271967 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert\") pod \"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:58 crc kubenswrapper[4817]: I0314 05:48:58.280993 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15bddcc1-f479-4390-96f1-f0fd2cd43578-cert\") pod 
\"infra-operator-controller-manager-54dc5b8f8d-sbktt\" (UID: \"15bddcc1-f479-4390-96f1-f0fd2cd43578\") " pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:58 crc kubenswrapper[4817]: I0314 05:48:58.572698 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:48:58 crc kubenswrapper[4817]: I0314 05:48:58.575762 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:58 crc kubenswrapper[4817]: I0314 05:48:58.581148 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c529641-9582-4a57-b54d-f1f733f21a89-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7v78tk\" (UID: \"2c529641-9582-4a57-b54d-f1f733f21a89\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:58 crc kubenswrapper[4817]: I0314 05:48:58.659225 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:48:59 crc kubenswrapper[4817]: E0314 05:48:59.439110 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703" Mar 14 05:48:59 crc kubenswrapper[4817]: E0314 05:48:59.439621 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttjrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bc894d9b-blx9z_openstack-operators(895e1039-8354-4cb0-85d0-a0b2cc112db6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:48:59 crc kubenswrapper[4817]: E0314 05:48:59.442087 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" podUID="895e1039-8354-4cb0-85d0-a0b2cc112db6" Mar 14 05:48:59 crc kubenswrapper[4817]: I0314 05:48:59.593436 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:59 crc kubenswrapper[4817]: I0314 05:48:59.593519 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:59 crc kubenswrapper[4817]: I0314 05:48:59.601154 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-metrics-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:59 crc kubenswrapper[4817]: I0314 05:48:59.601394 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1351dc38-2b39-4e57-869d-1b430e900250-webhook-certs\") pod \"openstack-operator-controller-manager-5bb879dbb8-vlsfx\" (UID: \"1351dc38-2b39-4e57-869d-1b430e900250\") " pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:59 crc kubenswrapper[4817]: I0314 05:48:59.819400 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:48:59 crc kubenswrapper[4817]: E0314 05:48:59.949605 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:af6065309235d342f15ba68d4bec51117e3a21cc630b5b72ba04aca2ce0d3703\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" podUID="895e1039-8354-4cb0-85d0-a0b2cc112db6" Mar 14 05:49:00 crc kubenswrapper[4817]: E0314 05:49:00.101461 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:a3bc074ddd9a26d3a8609e5dbdfa85a78449ba1c9b5542bff9949219d6760e60" Mar 14 05:49:00 crc kubenswrapper[4817]: E0314 05:49:00.101628 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:a3bc074ddd9a26d3a8609e5dbdfa85a78449ba1c9b5542bff9949219d6760e60,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qnhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5964f64c48-qrpsc_openstack-operators(0d8de2cd-0cc8-40b5-a549-7632e38e11a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:00 crc kubenswrapper[4817]: E0314 05:49:00.103373 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" podUID="0d8de2cd-0cc8-40b5-a549-7632e38e11a9" Mar 14 05:49:00 crc kubenswrapper[4817]: E0314 05:49:00.953297 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:a3bc074ddd9a26d3a8609e5dbdfa85a78449ba1c9b5542bff9949219d6760e60\\\"\"" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" podUID="0d8de2cd-0cc8-40b5-a549-7632e38e11a9" Mar 14 05:49:07 crc kubenswrapper[4817]: E0314 05:49:07.608190 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978" Mar 14 05:49:07 crc kubenswrapper[4817]: E0314 05:49:07.608917 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhzhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-574d45c66c-hs4lc_openstack-operators(5b65131d-d231-46ae-b5f9-95c9e4a0d69a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:07 crc kubenswrapper[4817]: E0314 05:49:07.610141 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" podUID="5b65131d-d231-46ae-b5f9-95c9e4a0d69a" Mar 14 05:49:07 crc kubenswrapper[4817]: E0314 05:49:07.969506 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a26d062af19b3bc6dc6633171f1eff8eec33e8e925465d4968a0b9a36012a7e7" Mar 14 05:49:07 crc kubenswrapper[4817]: E0314 05:49:07.969710 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a26d062af19b3bc6dc6633171f1eff8eec33e8e925465d4968a0b9a36012a7e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wb2d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf_openstack-operators(3d2cc81d-8675-4b17-a429-c0a29be998d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:07 crc kubenswrapper[4817]: E0314 05:49:07.970955 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" podUID="3d2cc81d-8675-4b17-a429-c0a29be998d9" Mar 14 05:49:08 crc kubenswrapper[4817]: E0314 05:49:08.033007 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" podUID="5b65131d-d231-46ae-b5f9-95c9e4a0d69a" Mar 14 05:49:08 crc kubenswrapper[4817]: E0314 05:49:08.034357 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a26d062af19b3bc6dc6633171f1eff8eec33e8e925465d4968a0b9a36012a7e7\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" podUID="3d2cc81d-8675-4b17-a429-c0a29be998d9" Mar 14 05:49:08 crc kubenswrapper[4817]: E0314 05:49:08.709697 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721" Mar 14 05:49:08 crc kubenswrapper[4817]: E0314 05:49:08.709912 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7dm7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-776c5696bf-gzl7j_openstack-operators(4e484bfe-93ce-49cb-b687-2fd92d4a8b60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:08 crc kubenswrapper[4817]: E0314 05:49:08.711067 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" podUID="4e484bfe-93ce-49cb-b687-2fd92d4a8b60" Mar 14 05:49:09 crc kubenswrapper[4817]: E0314 05:49:09.040349 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" podUID="4e484bfe-93ce-49cb-b687-2fd92d4a8b60" Mar 14 05:49:09 crc kubenswrapper[4817]: E0314 05:49:09.737387 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7" Mar 14 05:49:09 crc kubenswrapper[4817]: E0314 05:49:09.738009 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzbvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66d56f6ff4-c4p8n_openstack-operators(c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:09 crc kubenswrapper[4817]: E0314 05:49:09.740091 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" podUID="c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50" Mar 14 05:49:10 crc kubenswrapper[4817]: E0314 05:49:10.045357 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" podUID="c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50" Mar 14 05:49:11 crc kubenswrapper[4817]: E0314 05:49:11.231044 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.173:5001/openstack-k8s-operators/manila-operator:35899ff901cf143b24816c8ae1f0386aaf4d7cdb" Mar 14 05:49:11 crc kubenswrapper[4817]: E0314 05:49:11.231105 4817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.173:5001/openstack-k8s-operators/manila-operator:35899ff901cf143b24816c8ae1f0386aaf4d7cdb" Mar 14 05:49:11 crc kubenswrapper[4817]: E0314 05:49:11.231253 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.173:5001/openstack-k8s-operators/manila-operator:35899ff901cf143b24816c8ae1f0386aaf4d7cdb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n76hj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6b74cf5dc5-4tws4_openstack-operators(6dc5b773-7e09-4d0c-b7fb-e73a398784dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:11 crc kubenswrapper[4817]: E0314 05:49:11.232379 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc 
= copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" podUID="6dc5b773-7e09-4d0c-b7fb-e73a398784dd" Mar 14 05:49:12 crc kubenswrapper[4817]: E0314 05:49:12.498481 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.173:5001/openstack-k8s-operators/manila-operator:35899ff901cf143b24816c8ae1f0386aaf4d7cdb\\\"\"" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" podUID="6dc5b773-7e09-4d0c-b7fb-e73a398784dd" Mar 14 05:49:12 crc kubenswrapper[4817]: E0314 05:49:12.664630 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:3a0fc90da4caf7412ae01e21542b53a10fe7a2732a705b0ae83f926d72c7332a" Mar 14 05:49:12 crc kubenswrapper[4817]: E0314 05:49:12.664934 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:3a0fc90da4caf7412ae01e21542b53a10fe7a2732a705b0ae83f926d72c7332a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wj65b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6854b8b9d9-cr7p6_openstack-operators(fa24c106-dfe2-4250-9b00-b063f21f0dcd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:12 crc kubenswrapper[4817]: E0314 05:49:12.666166 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" podUID="fa24c106-dfe2-4250-9b00-b063f21f0dcd" Mar 14 05:49:13 crc kubenswrapper[4817]: E0314 05:49:13.500970 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3a0fc90da4caf7412ae01e21542b53a10fe7a2732a705b0ae83f926d72c7332a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" podUID="fa24c106-dfe2-4250-9b00-b063f21f0dcd" Mar 14 05:49:13 crc kubenswrapper[4817]: E0314 05:49:13.542921 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff" Mar 14 05:49:13 crc kubenswrapper[4817]: E0314 05:49:13.543127 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gg428,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7f84474648-kzrj9_openstack-operators(caf0784a-accc-4973-8daf-7239f91eacb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:13 crc kubenswrapper[4817]: E0314 05:49:13.544386 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" podUID="caf0784a-accc-4973-8daf-7239f91eacb3" Mar 14 05:49:14 crc kubenswrapper[4817]: E0314 05:49:14.244292 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:bbe772fa051f782c9dcc3c34ce43495e1116aa9089a760c10068790baa9b25ff\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" podUID="caf0784a-accc-4973-8daf-7239f91eacb3" Mar 14 05:49:14 crc kubenswrapper[4817]: E0314 05:49:14.310943 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca" Mar 14 05:49:14 crc kubenswrapper[4817]: E0314 05:49:14.311197 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tvkhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-684f77d66d-mxnmp_openstack-operators(2d86f031-6e59-41cb-a7c1-cfe91c54630b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:14 crc kubenswrapper[4817]: E0314 05:49:14.312478 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" podUID="2d86f031-6e59-41cb-a7c1-cfe91c54630b" Mar 14 05:49:15 crc kubenswrapper[4817]: E0314 05:49:15.772867 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" podUID="2d86f031-6e59-41cb-a7c1-cfe91c54630b" Mar 14 05:49:19 crc kubenswrapper[4817]: E0314 05:49:19.288922 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 14 05:49:19 crc kubenswrapper[4817]: E0314 05:49:19.289690 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: 
{{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvgwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6dknb_openstack-operators(9d193754-974c-4c1a-a142-28fc5f109935): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:49:19 crc kubenswrapper[4817]: E0314 05:49:19.290920 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" podUID="9d193754-974c-4c1a-a142-28fc5f109935" Mar 14 05:49:19 crc kubenswrapper[4817]: I0314 05:49:19.947902 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt"] Mar 14 05:49:19 crc kubenswrapper[4817]: W0314 05:49:19.957174 4817 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15bddcc1_f479_4390_96f1_f0fd2cd43578.slice/crio-f71bce60b5860f7805687ed3fde47cd0bde759d73441a0e0dd2b83814083f8dc WatchSource:0}: Error finding container f71bce60b5860f7805687ed3fde47cd0bde759d73441a0e0dd2b83814083f8dc: Status 404 returned error can't find the container with id f71bce60b5860f7805687ed3fde47cd0bde759d73441a0e0dd2b83814083f8dc Mar 14 05:49:20 crc kubenswrapper[4817]: I0314 05:49:20.018361 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk"] Mar 14 05:49:20 crc kubenswrapper[4817]: I0314 05:49:20.067526 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx"] Mar 14 05:49:20 crc kubenswrapper[4817]: W0314 05:49:20.072779 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1351dc38_2b39_4e57_869d_1b430e900250.slice/crio-d870aa410f86a16bc82470d0f9a090ae619a9024aaeca72ca7d7e5681ca0f2b7 WatchSource:0}: Error finding container d870aa410f86a16bc82470d0f9a090ae619a9024aaeca72ca7d7e5681ca0f2b7: Status 404 returned error can't find the container with id d870aa410f86a16bc82470d0f9a090ae619a9024aaeca72ca7d7e5681ca0f2b7 Mar 14 05:49:20 crc kubenswrapper[4817]: I0314 05:49:20.124313 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" event={"ID":"2c529641-9582-4a57-b54d-f1f733f21a89","Type":"ContainerStarted","Data":"5125f59521611b602f06dd3a34f71305cf6fde64fb557939e23f12ee0ab3e682"} Mar 14 05:49:20 crc kubenswrapper[4817]: I0314 05:49:20.127876 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" 
event={"ID":"15bddcc1-f479-4390-96f1-f0fd2cd43578","Type":"ContainerStarted","Data":"f71bce60b5860f7805687ed3fde47cd0bde759d73441a0e0dd2b83814083f8dc"} Mar 14 05:49:20 crc kubenswrapper[4817]: I0314 05:49:20.133310 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" event={"ID":"f1029de4-c046-47b8-820b-113369bf590a","Type":"ContainerStarted","Data":"ebdf265e9af4492feb16c59bcb65aaa38a459eb089b81f982ff3d532b082a60f"} Mar 14 05:49:20 crc kubenswrapper[4817]: I0314 05:49:20.133374 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" event={"ID":"1351dc38-2b39-4e57-869d-1b430e900250","Type":"ContainerStarted","Data":"d870aa410f86a16bc82470d0f9a090ae619a9024aaeca72ca7d7e5681ca0f2b7"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.140185 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" event={"ID":"78bda46b-797b-4cc4-9cf5-14a2bc692947","Type":"ContainerStarted","Data":"fb484f93d70feb2d342262bc64c8ca51852317adcd5ad6a52e33584b298656bb"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.140971 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.142164 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" event={"ID":"762056c0-1243-4e41-87ea-242c1d082965","Type":"ContainerStarted","Data":"0392f2ef83e5ce7689d29274790cfe91829e6038af181032ae0e923d0beada74"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.142653 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" Mar 14 05:49:21 crc kubenswrapper[4817]: 
I0314 05:49:21.144828 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" event={"ID":"7435e15d-0e12-4192-9725-59c501707754","Type":"ContainerStarted","Data":"15d8bb3a798532d328025aa266a935350af406f19253bbe5a65b032824a3562a"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.144942 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.150118 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" event={"ID":"be9a0979-c4ed-471b-9cc9-c3dd753f106d","Type":"ContainerStarted","Data":"407667e8319c4c965c7a185ddfb20a06587e7497707c31f818ee450a61b5f472"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.150301 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.151273 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" event={"ID":"d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c","Type":"ContainerStarted","Data":"9401e99f36712c4119063413202c5d31529ee24bb61b9fb88f9c10380ab6d49a"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.151782 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.153018 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" event={"ID":"0d8de2cd-0cc8-40b5-a549-7632e38e11a9","Type":"ContainerStarted","Data":"1d2a423a1d19ec26f5da6e58001e1f0c707c4c7476c8280048f5f83b3f617345"} Mar 14 05:49:21 
crc kubenswrapper[4817]: I0314 05:49:21.153216 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.154370 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" event={"ID":"2b1ddd08-cff6-4ec8-b701-77ad200ebd2f","Type":"ContainerStarted","Data":"81c09c4a7f83db6b5e19c68ee3648ff0e0b09ddc8c06620187399d42e714b3fd"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.154436 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.155584 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" event={"ID":"895e1039-8354-4cb0-85d0-a0b2cc112db6","Type":"ContainerStarted","Data":"a38cbf321f66d3a6e0bc6a564e3725a81a380b8215974a1f5b242d5daef5bc76"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.155952 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.159183 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" event={"ID":"1351dc38-2b39-4e57-869d-1b430e900250","Type":"ContainerStarted","Data":"9b7a1a8e260acd494c1d4e8d04ff893eb534c4f35a80b6f4b3334aae2816ce98"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.159674 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.161035 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" event={"ID":"12489bc5-14ae-42cb-9717-341e479b9e53","Type":"ContainerStarted","Data":"1805fc87d05d9656f8ceafeab730360d48ab036954583535e0efcf39360d1717"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.161455 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.163058 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf5xj" event={"ID":"5807aefc-3eaf-4761-8888-da8fb0ff4fae","Type":"ContainerStarted","Data":"5330434760f25f2c94af78d317cc8fe39d8023fe385ca07a932c261bba094853"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.165059 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" event={"ID":"71a2aa8e-73f2-46c2-b8ad-2230259a3ede","Type":"ContainerStarted","Data":"083ad9c06b86d742b58af399fb312b887db1a467c5db9b1bf6d2e0285928f6ad"} Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.165084 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.165096 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.173968 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" podStartSLOduration=9.035717229 podStartE2EDuration="39.173954128s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:43.815118916 +0000 UTC m=+977.853379662" lastFinishedPulling="2026-03-14 05:49:13.953355815 +0000 UTC 
m=+1007.991616561" observedRunningTime="2026-03-14 05:49:21.165404655 +0000 UTC m=+1015.203665411" watchObservedRunningTime="2026-03-14 05:49:21.173954128 +0000 UTC m=+1015.212214874" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.191746 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" podStartSLOduration=4.12143743 podStartE2EDuration="39.191730754s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.515963367 +0000 UTC m=+978.554224123" lastFinishedPulling="2026-03-14 05:49:19.586256701 +0000 UTC m=+1013.624517447" observedRunningTime="2026-03-14 05:49:21.190136688 +0000 UTC m=+1015.228397424" watchObservedRunningTime="2026-03-14 05:49:21.191730754 +0000 UTC m=+1015.229991500" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.228675 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" podStartSLOduration=39.228658463 podStartE2EDuration="39.228658463s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:49:21.223873127 +0000 UTC m=+1015.262133883" watchObservedRunningTime="2026-03-14 05:49:21.228658463 +0000 UTC m=+1015.266919199" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.245452 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" podStartSLOduration=4.294661683 podStartE2EDuration="39.24543798s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.624049939 +0000 UTC m=+978.662310685" lastFinishedPulling="2026-03-14 05:49:19.574826236 +0000 UTC m=+1013.613086982" observedRunningTime="2026-03-14 05:49:21.239520862 +0000 
UTC m=+1015.277781628" watchObservedRunningTime="2026-03-14 05:49:21.24543798 +0000 UTC m=+1015.283698726" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.265138 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" podStartSLOduration=8.484703659000001 podStartE2EDuration="39.2651203s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:43.274686946 +0000 UTC m=+977.312947692" lastFinishedPulling="2026-03-14 05:49:14.055103567 +0000 UTC m=+1008.093364333" observedRunningTime="2026-03-14 05:49:21.259973623 +0000 UTC m=+1015.298234379" watchObservedRunningTime="2026-03-14 05:49:21.2651203 +0000 UTC m=+1015.303381046" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.311960 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lf5xj" podStartSLOduration=4.972569069 podStartE2EDuration="40.311945611s" podCreationTimestamp="2026-03-14 05:48:41 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.205135653 +0000 UTC m=+978.243396399" lastFinishedPulling="2026-03-14 05:49:19.544512195 +0000 UTC m=+1013.582772941" observedRunningTime="2026-03-14 05:49:21.294349411 +0000 UTC m=+1015.332610157" watchObservedRunningTime="2026-03-14 05:49:21.311945611 +0000 UTC m=+1015.350206357" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.313647 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" podStartSLOduration=9.076959552 podStartE2EDuration="39.313638529s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:43.817766341 +0000 UTC m=+977.856027087" lastFinishedPulling="2026-03-14 05:49:14.054445308 +0000 UTC m=+1008.092706064" observedRunningTime="2026-03-14 05:49:21.311734645 +0000 UTC m=+1015.349995401" 
watchObservedRunningTime="2026-03-14 05:49:21.313638529 +0000 UTC m=+1015.351899275" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.330751 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" podStartSLOduration=4.373037171 podStartE2EDuration="39.330736135s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.625006176 +0000 UTC m=+978.663266922" lastFinishedPulling="2026-03-14 05:49:19.58270514 +0000 UTC m=+1013.620965886" observedRunningTime="2026-03-14 05:49:21.32495689 +0000 UTC m=+1015.363217636" watchObservedRunningTime="2026-03-14 05:49:21.330736135 +0000 UTC m=+1015.368996881" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.340588 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" podStartSLOduration=3.56136069 podStartE2EDuration="39.340568944s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:43.80187275 +0000 UTC m=+977.840133496" lastFinishedPulling="2026-03-14 05:49:19.581081004 +0000 UTC m=+1013.619341750" observedRunningTime="2026-03-14 05:49:21.338741122 +0000 UTC m=+1015.377001868" watchObservedRunningTime="2026-03-14 05:49:21.340568944 +0000 UTC m=+1015.378829690" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.365595 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" podStartSLOduration=8.781285948 podStartE2EDuration="39.365575255s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:43.369068188 +0000 UTC m=+977.407328934" lastFinishedPulling="2026-03-14 05:49:13.953357495 +0000 UTC m=+1007.991618241" observedRunningTime="2026-03-14 05:49:21.361739176 +0000 UTC m=+1015.399999932" 
watchObservedRunningTime="2026-03-14 05:49:21.365575255 +0000 UTC m=+1015.403836001" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.409625 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" podStartSLOduration=5.857250079 podStartE2EDuration="39.409609227s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.240775536 +0000 UTC m=+978.279036282" lastFinishedPulling="2026-03-14 05:49:17.793134684 +0000 UTC m=+1011.831395430" observedRunningTime="2026-03-14 05:49:21.391708148 +0000 UTC m=+1015.429968904" watchObservedRunningTime="2026-03-14 05:49:21.409609227 +0000 UTC m=+1015.447869973" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.429207 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" podStartSLOduration=4.20656545 podStartE2EDuration="39.429188613s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.363645808 +0000 UTC m=+978.401906554" lastFinishedPulling="2026-03-14 05:49:19.586268971 +0000 UTC m=+1013.624529717" observedRunningTime="2026-03-14 05:49:21.42660896 +0000 UTC m=+1015.464869706" watchObservedRunningTime="2026-03-14 05:49:21.429188613 +0000 UTC m=+1015.467449359" Mar 14 05:49:21 crc kubenswrapper[4817]: I0314 05:49:21.429402 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" podStartSLOduration=3.789661599 podStartE2EDuration="39.429396669s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:43.941337194 +0000 UTC m=+977.979597940" lastFinishedPulling="2026-03-14 05:49:19.581072264 +0000 UTC m=+1013.619333010" observedRunningTime="2026-03-14 05:49:21.412061816 +0000 UTC m=+1015.450322562" 
watchObservedRunningTime="2026-03-14 05:49:21.429396669 +0000 UTC m=+1015.467657415" Mar 14 05:49:22 crc kubenswrapper[4817]: I0314 05:49:22.181269 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:49:22 crc kubenswrapper[4817]: I0314 05:49:22.181621 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:49:23 crc kubenswrapper[4817]: I0314 05:49:23.180931 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" event={"ID":"4e484bfe-93ce-49cb-b687-2fd92d4a8b60","Type":"ContainerStarted","Data":"545fbb528e5f35dad9da013a7b2aa0e94287ec0c65a8aa8e84e9fb6c32123a63"} Mar 14 05:49:23 crc kubenswrapper[4817]: I0314 05:49:23.182127 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" Mar 14 05:49:23 crc kubenswrapper[4817]: I0314 05:49:23.183474 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" event={"ID":"3d2cc81d-8675-4b17-a429-c0a29be998d9","Type":"ContainerStarted","Data":"a0ff3f97d6a76583d65fc438b73c8c08116a2573ee959624f9f5511283ddb84c"} Mar 14 05:49:23 crc kubenswrapper[4817]: I0314 05:49:23.204723 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" podStartSLOduration=2.903518854 podStartE2EDuration="41.20470186s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.0175187 +0000 UTC m=+978.055779446" lastFinishedPulling="2026-03-14 05:49:22.318701706 +0000 UTC m=+1016.356962452" observedRunningTime="2026-03-14 05:49:23.195478818 +0000 UTC m=+1017.233739574" watchObservedRunningTime="2026-03-14 05:49:23.20470186 
+0000 UTC m=+1017.242962606" Mar 14 05:49:23 crc kubenswrapper[4817]: I0314 05:49:23.215401 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" podStartSLOduration=3.115620791 podStartE2EDuration="41.215384973s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.21771409 +0000 UTC m=+978.255974836" lastFinishedPulling="2026-03-14 05:49:22.317478282 +0000 UTC m=+1016.355739018" observedRunningTime="2026-03-14 05:49:23.213020646 +0000 UTC m=+1017.251281412" watchObservedRunningTime="2026-03-14 05:49:23.215384973 +0000 UTC m=+1017.253645719" Mar 14 05:49:23 crc kubenswrapper[4817]: I0314 05:49:23.235287 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lf5xj" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="registry-server" probeResult="failure" output=< Mar 14 05:49:23 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:49:23 crc kubenswrapper[4817]: > Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.201637 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" event={"ID":"5b65131d-d231-46ae-b5f9-95c9e4a0d69a","Type":"ContainerStarted","Data":"ea6630a993ba20734e6439d09b293a33f953a03bccc741f3ac0759fcae5853d9"} Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.202138 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.204051 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" 
event={"ID":"2c529641-9582-4a57-b54d-f1f733f21a89","Type":"ContainerStarted","Data":"dad476a8cf936d25063e6795f3929c1ef83aeb4b2fd6c6a2c64b9b7f6647b537"} Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.204184 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.205616 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" event={"ID":"15bddcc1-f479-4390-96f1-f0fd2cd43578","Type":"ContainerStarted","Data":"423e3012c78733f92e76e97841746103b977ae91f5126435a93758fbf72bcdcd"} Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.205881 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.207009 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" event={"ID":"c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50","Type":"ContainerStarted","Data":"b502499d17436948b4f703a238d033095e163dac2e8da2aa13faa47017bfca09"} Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.207197 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.223738 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" podStartSLOduration=3.199918079 podStartE2EDuration="43.223724637s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.363163955 +0000 UTC m=+978.401424701" lastFinishedPulling="2026-03-14 05:49:24.386970513 +0000 UTC m=+1018.425231259" 
observedRunningTime="2026-03-14 05:49:25.222757099 +0000 UTC m=+1019.261017855" watchObservedRunningTime="2026-03-14 05:49:25.223724637 +0000 UTC m=+1019.261985383" Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.235292 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" podStartSLOduration=2.085977646 podStartE2EDuration="43.235273975s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:43.489195613 +0000 UTC m=+977.527456359" lastFinishedPulling="2026-03-14 05:49:24.638491942 +0000 UTC m=+1018.676752688" observedRunningTime="2026-03-14 05:49:25.234060191 +0000 UTC m=+1019.272320947" watchObservedRunningTime="2026-03-14 05:49:25.235273975 +0000 UTC m=+1019.273534721" Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.260763 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" podStartSLOduration=38.900800746 podStartE2EDuration="43.260744499s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:49:20.026256798 +0000 UTC m=+1014.064517544" lastFinishedPulling="2026-03-14 05:49:24.386200551 +0000 UTC m=+1018.424461297" observedRunningTime="2026-03-14 05:49:25.254980595 +0000 UTC m=+1019.293241341" watchObservedRunningTime="2026-03-14 05:49:25.260744499 +0000 UTC m=+1019.299005245" Mar 14 05:49:25 crc kubenswrapper[4817]: I0314 05:49:25.281641 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" podStartSLOduration=38.855629151 podStartE2EDuration="43.281625502s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:49:19.959746117 +0000 UTC m=+1013.998006853" lastFinishedPulling="2026-03-14 05:49:24.385742458 +0000 UTC m=+1018.424003204" 
observedRunningTime="2026-03-14 05:49:25.280673025 +0000 UTC m=+1019.318933771" watchObservedRunningTime="2026-03-14 05:49:25.281625502 +0000 UTC m=+1019.319886248" Mar 14 05:49:26 crc kubenswrapper[4817]: I0314 05:49:26.233219 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" event={"ID":"6dc5b773-7e09-4d0c-b7fb-e73a398784dd","Type":"ContainerStarted","Data":"f0d14f182e8897ff826d71cf31d71e3bf9927d21af86d5929890774f8d91865a"} Mar 14 05:49:26 crc kubenswrapper[4817]: I0314 05:49:26.234359 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" Mar 14 05:49:26 crc kubenswrapper[4817]: I0314 05:49:26.760442 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" podStartSLOduration=3.785595615 podStartE2EDuration="44.760415825s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.020659759 +0000 UTC m=+978.058920505" lastFinishedPulling="2026-03-14 05:49:24.995479969 +0000 UTC m=+1019.033740715" observedRunningTime="2026-03-14 05:49:26.259925649 +0000 UTC m=+1020.298186395" watchObservedRunningTime="2026-03-14 05:49:26.760415825 +0000 UTC m=+1020.798676571" Mar 14 05:49:27 crc kubenswrapper[4817]: I0314 05:49:27.243252 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" event={"ID":"caf0784a-accc-4973-8daf-7239f91eacb3","Type":"ContainerStarted","Data":"7f30d502e7bdaed05ebf6323d6c6f76e05086c25ac27ea9ec595aa52ca088c28"} Mar 14 05:49:27 crc kubenswrapper[4817]: I0314 05:49:27.244180 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" Mar 14 05:49:27 crc kubenswrapper[4817]: I0314 05:49:27.246239 
4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" event={"ID":"fa24c106-dfe2-4250-9b00-b063f21f0dcd","Type":"ContainerStarted","Data":"72317d97875e0ae32f0ec40586a7a718a10598300b25780f75bcf2da74dce135"} Mar 14 05:49:27 crc kubenswrapper[4817]: I0314 05:49:27.246729 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" Mar 14 05:49:27 crc kubenswrapper[4817]: I0314 05:49:27.269251 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" podStartSLOduration=2.885900362 podStartE2EDuration="45.269231127s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.362748893 +0000 UTC m=+978.401009639" lastFinishedPulling="2026-03-14 05:49:26.746079658 +0000 UTC m=+1020.784340404" observedRunningTime="2026-03-14 05:49:27.264796081 +0000 UTC m=+1021.303056827" watchObservedRunningTime="2026-03-14 05:49:27.269231127 +0000 UTC m=+1021.307491883" Mar 14 05:49:27 crc kubenswrapper[4817]: I0314 05:49:27.289522 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" podStartSLOduration=3.050097939 podStartE2EDuration="45.289504264s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.505662964 +0000 UTC m=+978.543923710" lastFinishedPulling="2026-03-14 05:49:26.745069289 +0000 UTC m=+1020.783330035" observedRunningTime="2026-03-14 05:49:27.284191903 +0000 UTC m=+1021.322452669" watchObservedRunningTime="2026-03-14 05:49:27.289504264 +0000 UTC m=+1021.327765010" Mar 14 05:49:28 crc kubenswrapper[4817]: I0314 05:49:28.254062 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" event={"ID":"2d86f031-6e59-41cb-a7c1-cfe91c54630b","Type":"ContainerStarted","Data":"c228ac516d18987e1a5139876dfa840e456dd90481cc1291fd34cfab56aa7f74"} Mar 14 05:49:28 crc kubenswrapper[4817]: I0314 05:49:28.254553 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" Mar 14 05:49:28 crc kubenswrapper[4817]: I0314 05:49:28.279487 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" podStartSLOduration=2.629569967 podStartE2EDuration="46.279468861s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:43.940613864 +0000 UTC m=+977.978874610" lastFinishedPulling="2026-03-14 05:49:27.590512758 +0000 UTC m=+1021.628773504" observedRunningTime="2026-03-14 05:49:28.276639431 +0000 UTC m=+1022.314900187" watchObservedRunningTime="2026-03-14 05:49:28.279468861 +0000 UTC m=+1022.317729597" Mar 14 05:49:29 crc kubenswrapper[4817]: I0314 05:49:29.830122 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5bb879dbb8-vlsfx" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.221386 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.264181 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.429620 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-d47688694-7cvhh" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.463168 4817 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf5xj"] Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.504088 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-vsjwb" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.521203 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-c4p8n" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.560735 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qrpsc" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.596907 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-lssxv" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.626435 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-9w9wj" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.697344 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bc894d9b-blx9z" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.709510 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mxnmp" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.722082 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6b74cf5dc5-4tws4" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.752175 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.754309 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.909877 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-gzl7j" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.942160 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f84474648-kzrj9" Mar 14 05:49:32 crc kubenswrapper[4817]: I0314 05:49:32.951660 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-xvl5j" Mar 14 05:49:33 crc kubenswrapper[4817]: I0314 05:49:33.098447 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-75w5q" Mar 14 05:49:33 crc kubenswrapper[4817]: I0314 05:49:33.150240 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-hs4lc" Mar 14 05:49:33 crc kubenswrapper[4817]: I0314 05:49:33.160685 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7f9cc5dd44-nb94v" Mar 14 05:49:33 crc kubenswrapper[4817]: I0314 05:49:33.185823 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6854b8b9d9-cr7p6" Mar 14 05:49:33 crc kubenswrapper[4817]: I0314 05:49:33.284293 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lf5xj" 
podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="registry-server" containerID="cri-o://5330434760f25f2c94af78d317cc8fe39d8023fe385ca07a932c261bba094853" gracePeriod=2 Mar 14 05:49:33 crc kubenswrapper[4817]: I0314 05:49:33.324205 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-9j88f" Mar 14 05:49:33 crc kubenswrapper[4817]: I0314 05:49:33.524757 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xrd4b" Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.293232 4817 generic.go:334] "Generic (PLEG): container finished" podID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerID="5330434760f25f2c94af78d317cc8fe39d8023fe385ca07a932c261bba094853" exitCode=0 Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.293286 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf5xj" event={"ID":"5807aefc-3eaf-4761-8888-da8fb0ff4fae","Type":"ContainerDied","Data":"5330434760f25f2c94af78d317cc8fe39d8023fe385ca07a932c261bba094853"} Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.542785 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.690878 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-utilities\") pod \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.690994 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6q8v\" (UniqueName: \"kubernetes.io/projected/5807aefc-3eaf-4761-8888-da8fb0ff4fae-kube-api-access-r6q8v\") pod \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.691100 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-catalog-content\") pod \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\" (UID: \"5807aefc-3eaf-4761-8888-da8fb0ff4fae\") " Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.692200 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-utilities" (OuterVolumeSpecName: "utilities") pod "5807aefc-3eaf-4761-8888-da8fb0ff4fae" (UID: "5807aefc-3eaf-4761-8888-da8fb0ff4fae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.697397 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5807aefc-3eaf-4761-8888-da8fb0ff4fae-kube-api-access-r6q8v" (OuterVolumeSpecName: "kube-api-access-r6q8v") pod "5807aefc-3eaf-4761-8888-da8fb0ff4fae" (UID: "5807aefc-3eaf-4761-8888-da8fb0ff4fae"). InnerVolumeSpecName "kube-api-access-r6q8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.718227 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5807aefc-3eaf-4761-8888-da8fb0ff4fae" (UID: "5807aefc-3eaf-4761-8888-da8fb0ff4fae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:49:34 crc kubenswrapper[4817]: E0314 05:49:34.734139 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" podUID="9d193754-974c-4c1a-a142-28fc5f109935" Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.793307 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6q8v\" (UniqueName: \"kubernetes.io/projected/5807aefc-3eaf-4761-8888-da8fb0ff4fae-kube-api-access-r6q8v\") on node \"crc\" DevicePath \"\"" Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.793360 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:49:34 crc kubenswrapper[4817]: I0314 05:49:34.793380 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5807aefc-3eaf-4761-8888-da8fb0ff4fae-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:49:35 crc kubenswrapper[4817]: I0314 05:49:35.302905 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf5xj" 
event={"ID":"5807aefc-3eaf-4761-8888-da8fb0ff4fae","Type":"ContainerDied","Data":"4c2b54476c20da2ce9d31e8bbc02c79749359ed79ff6cb1f2ce317d41ee148d1"} Mar 14 05:49:35 crc kubenswrapper[4817]: I0314 05:49:35.304033 4817 scope.go:117] "RemoveContainer" containerID="5330434760f25f2c94af78d317cc8fe39d8023fe385ca07a932c261bba094853" Mar 14 05:49:35 crc kubenswrapper[4817]: I0314 05:49:35.303998 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lf5xj" Mar 14 05:49:35 crc kubenswrapper[4817]: I0314 05:49:35.327816 4817 scope.go:117] "RemoveContainer" containerID="88473530c05c1b57e848574bc93692d59d67ff066b3db42e23b72b3a3b35e34c" Mar 14 05:49:35 crc kubenswrapper[4817]: I0314 05:49:35.331362 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf5xj"] Mar 14 05:49:35 crc kubenswrapper[4817]: I0314 05:49:35.343596 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf5xj"] Mar 14 05:49:35 crc kubenswrapper[4817]: I0314 05:49:35.343868 4817 scope.go:117] "RemoveContainer" containerID="af37f05811b23ce2ff6f5c0345a823544bd538145559a5d91d8f9f8fce11fb8a" Mar 14 05:49:36 crc kubenswrapper[4817]: I0314 05:49:36.740199 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" path="/var/lib/kubelet/pods/5807aefc-3eaf-4761-8888-da8fb0ff4fae/volumes" Mar 14 05:49:38 crc kubenswrapper[4817]: I0314 05:49:38.666975 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7v78tk" Mar 14 05:49:38 crc kubenswrapper[4817]: I0314 05:49:38.671994 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54dc5b8f8d-sbktt" Mar 14 05:49:48 crc kubenswrapper[4817]: I0314 05:49:48.388311 4817 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" event={"ID":"9d193754-974c-4c1a-a142-28fc5f109935","Type":"ContainerStarted","Data":"8dcf8c0c420488549aa502980cd9a9c3acd2da10804ea62b27b7c7876ae59e92"} Mar 14 05:49:48 crc kubenswrapper[4817]: I0314 05:49:48.408351 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6dknb" podStartSLOduration=3.7541760330000002 podStartE2EDuration="1m6.408334537s" podCreationTimestamp="2026-03-14 05:48:42 +0000 UTC" firstStartedPulling="2026-03-14 05:48:44.621323401 +0000 UTC m=+978.659584147" lastFinishedPulling="2026-03-14 05:49:47.275481905 +0000 UTC m=+1041.313742651" observedRunningTime="2026-03-14 05:49:48.402539349 +0000 UTC m=+1042.440800095" watchObservedRunningTime="2026-03-14 05:49:48.408334537 +0000 UTC m=+1042.446595283" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.468986 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557790-cdbfp"] Mar 14 05:50:00 crc kubenswrapper[4817]: E0314 05:50:00.469944 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="extract-content" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.469959 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="extract-content" Mar 14 05:50:00 crc kubenswrapper[4817]: E0314 05:50:00.469984 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="registry-server" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.469990 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="registry-server" Mar 14 05:50:00 crc kubenswrapper[4817]: E0314 05:50:00.470001 4817 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="extract-utilities" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.470011 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="extract-utilities" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.470149 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="5807aefc-3eaf-4761-8888-da8fb0ff4fae" containerName="registry-server" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.470675 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557790-cdbfp" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.482572 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557790-cdbfp"] Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.485203 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.486308 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.487685 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.584954 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfbx2\" (UniqueName: \"kubernetes.io/projected/0edb65ab-7787-4204-bf56-c6ab3bc4e1f1-kube-api-access-jfbx2\") pod \"auto-csr-approver-29557790-cdbfp\" (UID: \"0edb65ab-7787-4204-bf56-c6ab3bc4e1f1\") " pod="openshift-infra/auto-csr-approver-29557790-cdbfp" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.686954 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jfbx2\" (UniqueName: \"kubernetes.io/projected/0edb65ab-7787-4204-bf56-c6ab3bc4e1f1-kube-api-access-jfbx2\") pod \"auto-csr-approver-29557790-cdbfp\" (UID: \"0edb65ab-7787-4204-bf56-c6ab3bc4e1f1\") " pod="openshift-infra/auto-csr-approver-29557790-cdbfp" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.718051 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfbx2\" (UniqueName: \"kubernetes.io/projected/0edb65ab-7787-4204-bf56-c6ab3bc4e1f1-kube-api-access-jfbx2\") pod \"auto-csr-approver-29557790-cdbfp\" (UID: \"0edb65ab-7787-4204-bf56-c6ab3bc4e1f1\") " pod="openshift-infra/auto-csr-approver-29557790-cdbfp" Mar 14 05:50:00 crc kubenswrapper[4817]: I0314 05:50:00.799032 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557790-cdbfp" Mar 14 05:50:01 crc kubenswrapper[4817]: I0314 05:50:01.233390 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557790-cdbfp"] Mar 14 05:50:01 crc kubenswrapper[4817]: I0314 05:50:01.479339 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557790-cdbfp" event={"ID":"0edb65ab-7787-4204-bf56-c6ab3bc4e1f1","Type":"ContainerStarted","Data":"8aaaf1c0f9630d31ae707c473480928e4f4c5141845e7dda2dfc3062643439c4"} Mar 14 05:50:03 crc kubenswrapper[4817]: I0314 05:50:03.498760 4817 generic.go:334] "Generic (PLEG): container finished" podID="0edb65ab-7787-4204-bf56-c6ab3bc4e1f1" containerID="d02d971a1d9cc72a695ffa5496c50a92a43fd9b9c3e5064811cea5c3fe1396d6" exitCode=0 Mar 14 05:50:03 crc kubenswrapper[4817]: I0314 05:50:03.498852 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557790-cdbfp" event={"ID":"0edb65ab-7787-4204-bf56-c6ab3bc4e1f1","Type":"ContainerDied","Data":"d02d971a1d9cc72a695ffa5496c50a92a43fd9b9c3e5064811cea5c3fe1396d6"} Mar 14 05:50:04 crc kubenswrapper[4817]: I0314 
05:50:04.752931 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557790-cdbfp" Mar 14 05:50:04 crc kubenswrapper[4817]: I0314 05:50:04.852402 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfbx2\" (UniqueName: \"kubernetes.io/projected/0edb65ab-7787-4204-bf56-c6ab3bc4e1f1-kube-api-access-jfbx2\") pod \"0edb65ab-7787-4204-bf56-c6ab3bc4e1f1\" (UID: \"0edb65ab-7787-4204-bf56-c6ab3bc4e1f1\") " Mar 14 05:50:04 crc kubenswrapper[4817]: I0314 05:50:04.859081 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0edb65ab-7787-4204-bf56-c6ab3bc4e1f1-kube-api-access-jfbx2" (OuterVolumeSpecName: "kube-api-access-jfbx2") pod "0edb65ab-7787-4204-bf56-c6ab3bc4e1f1" (UID: "0edb65ab-7787-4204-bf56-c6ab3bc4e1f1"). InnerVolumeSpecName "kube-api-access-jfbx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:50:04 crc kubenswrapper[4817]: I0314 05:50:04.953827 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfbx2\" (UniqueName: \"kubernetes.io/projected/0edb65ab-7787-4204-bf56-c6ab3bc4e1f1-kube-api-access-jfbx2\") on node \"crc\" DevicePath \"\"" Mar 14 05:50:05 crc kubenswrapper[4817]: I0314 05:50:05.511602 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557790-cdbfp" event={"ID":"0edb65ab-7787-4204-bf56-c6ab3bc4e1f1","Type":"ContainerDied","Data":"8aaaf1c0f9630d31ae707c473480928e4f4c5141845e7dda2dfc3062643439c4"} Mar 14 05:50:05 crc kubenswrapper[4817]: I0314 05:50:05.511645 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aaaf1c0f9630d31ae707c473480928e4f4c5141845e7dda2dfc3062643439c4" Mar 14 05:50:05 crc kubenswrapper[4817]: I0314 05:50:05.511653 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557790-cdbfp" Mar 14 05:50:05 crc kubenswrapper[4817]: I0314 05:50:05.814991 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557784-lv7vw"] Mar 14 05:50:05 crc kubenswrapper[4817]: I0314 05:50:05.820706 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557784-lv7vw"] Mar 14 05:50:06 crc kubenswrapper[4817]: I0314 05:50:06.739144 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af00ae3a-b371-44a4-80ad-6cf011ca952f" path="/var/lib/kubelet/pods/af00ae3a-b371-44a4-80ad-6cf011ca952f/volumes" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.018135 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-88fmz"] Mar 14 05:50:07 crc kubenswrapper[4817]: E0314 05:50:07.018439 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0edb65ab-7787-4204-bf56-c6ab3bc4e1f1" containerName="oc" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.018452 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="0edb65ab-7787-4204-bf56-c6ab3bc4e1f1" containerName="oc" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.018579 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="0edb65ab-7787-4204-bf56-c6ab3bc4e1f1" containerName="oc" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.019239 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.021542 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-hrfkz" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.023006 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.023008 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.028976 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.038507 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-88fmz"] Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.090740 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hr222"] Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.091832 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.094827 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.108087 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hr222"] Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.172622 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b83403-20ed-435f-b693-9bec1e143db1-config\") pod \"dnsmasq-dns-675f4bcbfc-88fmz\" (UID: \"c7b83403-20ed-435f-b693-9bec1e143db1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.172682 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9f5\" (UniqueName: \"kubernetes.io/projected/c7b83403-20ed-435f-b693-9bec1e143db1-kube-api-access-tr9f5\") pod \"dnsmasq-dns-675f4bcbfc-88fmz\" (UID: \"c7b83403-20ed-435f-b693-9bec1e143db1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.274146 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr9f5\" (UniqueName: \"kubernetes.io/projected/c7b83403-20ed-435f-b693-9bec1e143db1-kube-api-access-tr9f5\") pod \"dnsmasq-dns-675f4bcbfc-88fmz\" (UID: \"c7b83403-20ed-435f-b693-9bec1e143db1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.274215 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ctq9\" (UniqueName: \"kubernetes.io/projected/340de821-498b-4951-a770-43e885b0f981-kube-api-access-4ctq9\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.274247 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.274529 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-config\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.274665 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b83403-20ed-435f-b693-9bec1e143db1-config\") pod \"dnsmasq-dns-675f4bcbfc-88fmz\" (UID: \"c7b83403-20ed-435f-b693-9bec1e143db1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.275551 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b83403-20ed-435f-b693-9bec1e143db1-config\") pod \"dnsmasq-dns-675f4bcbfc-88fmz\" (UID: \"c7b83403-20ed-435f-b693-9bec1e143db1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.301162 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9f5\" (UniqueName: \"kubernetes.io/projected/c7b83403-20ed-435f-b693-9bec1e143db1-kube-api-access-tr9f5\") pod \"dnsmasq-dns-675f4bcbfc-88fmz\" (UID: \"c7b83403-20ed-435f-b693-9bec1e143db1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:07 crc kubenswrapper[4817]: 
I0314 05:50:07.335972 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.376153 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-config\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.376242 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ctq9\" (UniqueName: \"kubernetes.io/projected/340de821-498b-4951-a770-43e885b0f981-kube-api-access-4ctq9\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.376272 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.377215 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.378214 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-config\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.394193 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ctq9\" (UniqueName: \"kubernetes.io/projected/340de821-498b-4951-a770-43e885b0f981-kube-api-access-4ctq9\") pod \"dnsmasq-dns-78dd6ddcc-hr222\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.407783 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.813073 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-88fmz"] Mar 14 05:50:07 crc kubenswrapper[4817]: W0314 05:50:07.816839 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b83403_20ed_435f_b693_9bec1e143db1.slice/crio-339690da31d0468d8b59c2cf3ecc3005de6d6dd500180654ff687810cbe86aa5 WatchSource:0}: Error finding container 339690da31d0468d8b59c2cf3ecc3005de6d6dd500180654ff687810cbe86aa5: Status 404 returned error can't find the container with id 339690da31d0468d8b59c2cf3ecc3005de6d6dd500180654ff687810cbe86aa5 Mar 14 05:50:07 crc kubenswrapper[4817]: I0314 05:50:07.883026 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hr222"] Mar 14 05:50:08 crc kubenswrapper[4817]: I0314 05:50:08.565879 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:50:08 crc kubenswrapper[4817]: I0314 05:50:08.565945 4817 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:50:08 crc kubenswrapper[4817]: I0314 05:50:08.689735 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" event={"ID":"c7b83403-20ed-435f-b693-9bec1e143db1","Type":"ContainerStarted","Data":"339690da31d0468d8b59c2cf3ecc3005de6d6dd500180654ff687810cbe86aa5"} Mar 14 05:50:08 crc kubenswrapper[4817]: I0314 05:50:08.692433 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" event={"ID":"340de821-498b-4951-a770-43e885b0f981","Type":"ContainerStarted","Data":"9d05a1ef5dd5d2bfc279b5a71c53f65baebcf9bc0be09bae2062bc8b58a1ae54"} Mar 14 05:50:09 crc kubenswrapper[4817]: I0314 05:50:09.874632 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-88fmz"] Mar 14 05:50:09 crc kubenswrapper[4817]: I0314 05:50:09.910259 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-g9pvt"] Mar 14 05:50:09 crc kubenswrapper[4817]: I0314 05:50:09.911352 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:09 crc kubenswrapper[4817]: I0314 05:50:09.928398 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-g9pvt"] Mar 14 05:50:09 crc kubenswrapper[4817]: I0314 05:50:09.977790 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-config\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:09 crc kubenswrapper[4817]: I0314 05:50:09.977856 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhns\" (UniqueName: \"kubernetes.io/projected/c1a7c645-ea1c-4465-ae67-c995333c1dec-kube-api-access-kfhns\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:09 crc kubenswrapper[4817]: I0314 05:50:09.977932 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.079392 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-config\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.079471 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfhns\" (UniqueName: 
\"kubernetes.io/projected/c1a7c645-ea1c-4465-ae67-c995333c1dec-kube-api-access-kfhns\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.079548 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.080610 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.081297 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-config\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.114079 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfhns\" (UniqueName: \"kubernetes.io/projected/c1a7c645-ea1c-4465-ae67-c995333c1dec-kube-api-access-kfhns\") pod \"dnsmasq-dns-5ccc8479f9-g9pvt\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") " pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.201399 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hr222"] Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.246188 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.266906 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rr7f"] Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.268128 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.282071 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95cdk\" (UniqueName: \"kubernetes.io/projected/745d9a05-a9f2-4651-8ee2-524fc96e0c40-kube-api-access-95cdk\") pod \"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.282176 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.282201 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-config\") pod \"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.289019 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rr7f"] Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.386336 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.386384 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-config\") pod \"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.386457 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95cdk\" (UniqueName: \"kubernetes.io/projected/745d9a05-a9f2-4651-8ee2-524fc96e0c40-kube-api-access-95cdk\") pod \"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.388087 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.388585 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-config\") pod \"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.412576 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95cdk\" (UniqueName: \"kubernetes.io/projected/745d9a05-a9f2-4651-8ee2-524fc96e0c40-kube-api-access-95cdk\") pod 
\"dnsmasq-dns-57d769cc4f-4rr7f\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") " pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.608510 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" Mar 14 05:50:10 crc kubenswrapper[4817]: I0314 05:50:10.875138 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-g9pvt"] Mar 14 05:50:10 crc kubenswrapper[4817]: W0314 05:50:10.882080 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a7c645_ea1c_4465_ae67_c995333c1dec.slice/crio-57bbd863e59bf4e4d83a11c5c7aef7af077adf14d1e13b0f4814151d8e3d3497 WatchSource:0}: Error finding container 57bbd863e59bf4e4d83a11c5c7aef7af077adf14d1e13b0f4814151d8e3d3497: Status 404 returned error can't find the container with id 57bbd863e59bf4e4d83a11c5c7aef7af077adf14d1e13b0f4814151d8e3d3497 Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.065172 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.066536 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.070519 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.070580 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.070519 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.070738 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.070772 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.071298 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-84h2h" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.071549 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.079855 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110361 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xht8v\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-kube-api-access-xht8v\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110650 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110696 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110721 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110780 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f20e21c5-3d26-4494-a4d7-43323e059f31-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110816 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f20e21c5-3d26-4494-a4d7-43323e059f31-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110859 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110887 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110927 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110964 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.110987 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.136495 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rr7f"] Mar 14 05:50:11 
crc kubenswrapper[4817]: I0314 05:50:11.212856 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f20e21c5-3d26-4494-a4d7-43323e059f31-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.213729 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.213998 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214070 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214130 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214157 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214422 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xht8v\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-kube-api-access-xht8v\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214556 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214594 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214616 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214682 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/f20e21c5-3d26-4494-a4d7-43323e059f31-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.214926 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.215231 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.215276 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.215432 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.215927 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.216304 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.219662 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f20e21c5-3d26-4494-a4d7-43323e059f31-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.220954 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.231948 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.236498 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xht8v\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-kube-api-access-xht8v\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc 
kubenswrapper[4817]: I0314 05:50:11.237400 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f20e21c5-3d26-4494-a4d7-43323e059f31-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.252602 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.402124 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.424368 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.426749 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.430520 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vcfmf" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.430758 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.430813 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.432783 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.433089 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.433452 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.436574 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.437062 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522049 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522102 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vslw5\" (UniqueName: 
\"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-kube-api-access-vslw5\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522132 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522220 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522293 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522342 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522382 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522404 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-config-data\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522433 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522541 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.522623 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.624838 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.624909 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-config-data\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.624945 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.624979 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.625007 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.625035 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.625057 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vslw5\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-kube-api-access-vslw5\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.625078 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.625124 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.625154 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.625195 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.628791 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-config-data\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.629060 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.631200 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.631815 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.631971 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.633631 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " 
pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.636519 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.639112 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.640115 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.648793 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.655180 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vslw5\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-kube-api-access-vslw5\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.661253 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " pod="openstack/rabbitmq-server-0" Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.731765 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" event={"ID":"745d9a05-a9f2-4651-8ee2-524fc96e0c40","Type":"ContainerStarted","Data":"4bf08670ce743406d325f4934b71afd1e509b43042011a3ecf0bc1cf1d35d323"} Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.733746 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" event={"ID":"c1a7c645-ea1c-4465-ae67-c995333c1dec","Type":"ContainerStarted","Data":"57bbd863e59bf4e4d83a11c5c7aef7af077adf14d1e13b0f4814151d8e3d3497"} Mar 14 05:50:11 crc kubenswrapper[4817]: I0314 05:50:11.745774 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.060068 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 05:50:12 crc kubenswrapper[4817]: W0314 05:50:12.090661 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf20e21c5_3d26_4494_a4d7_43323e059f31.slice/crio-0c0a06beec4e579f33b9eac144b521ad83125abef800f71f4b58ba7c01d82664 WatchSource:0}: Error finding container 0c0a06beec4e579f33b9eac144b521ad83125abef800f71f4b58ba7c01d82664: Status 404 returned error can't find the container with id 0c0a06beec4e579f33b9eac144b521ad83125abef800f71f4b58ba7c01d82664 Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.231696 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.412997 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 14 05:50:12 crc 
kubenswrapper[4817]: I0314 05:50:12.414470 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.418937 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.420148 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-98ddx" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.421168 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.421483 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.432963 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.444458 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.542303 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099647fc-5cd6-4547-9400-8df4b6016b50-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.542594 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/099647fc-5cd6-4547-9400-8df4b6016b50-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.542620 
4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/099647fc-5cd6-4547-9400-8df4b6016b50-config-data-generated\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.542638 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28cj\" (UniqueName: \"kubernetes.io/projected/099647fc-5cd6-4547-9400-8df4b6016b50-kube-api-access-f28cj\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.542679 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-kolla-config\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.542696 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.542713 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-operator-scripts\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.542742 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-config-data-default\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.644417 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/099647fc-5cd6-4547-9400-8df4b6016b50-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.644494 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/099647fc-5cd6-4547-9400-8df4b6016b50-config-data-generated\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.644516 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f28cj\" (UniqueName: \"kubernetes.io/projected/099647fc-5cd6-4547-9400-8df4b6016b50-kube-api-access-f28cj\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.644558 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-kolla-config\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.644576 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.644593 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-operator-scripts\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.644623 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-config-data-default\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.644641 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099647fc-5cd6-4547-9400-8df4b6016b50-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.649295 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.649598 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/099647fc-5cd6-4547-9400-8df4b6016b50-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.649658 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-kolla-config\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.650537 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099647fc-5cd6-4547-9400-8df4b6016b50-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.651478 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-config-data-default\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.652027 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/099647fc-5cd6-4547-9400-8df4b6016b50-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.656186 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/099647fc-5cd6-4547-9400-8df4b6016b50-operator-scripts\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.681515 4817 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f28cj\" (UniqueName: \"kubernetes.io/projected/099647fc-5cd6-4547-9400-8df4b6016b50-kube-api-access-f28cj\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.699608 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"099647fc-5cd6-4547-9400-8df4b6016b50\") " pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.743446 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.864181 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f20e21c5-3d26-4494-a4d7-43323e059f31","Type":"ContainerStarted","Data":"0c0a06beec4e579f33b9eac144b521ad83125abef800f71f4b58ba7c01d82664"} Mar 14 05:50:12 crc kubenswrapper[4817]: I0314 05:50:12.864223 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa3ffb28-ad8e-4691-a5ff-ae17d083a019","Type":"ContainerStarted","Data":"83903c220563b6bc263ea2d00bb8243b6096e975d3c43ae9398fc9f15b3f99b2"} Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.396490 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.698574 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.700316 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.706175 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.710390 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ndk6p" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.710576 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.710907 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.711072 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.860214 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"099647fc-5cd6-4547-9400-8df4b6016b50","Type":"ContainerStarted","Data":"8a1a08abb7dd1d87648545002aa78742395d7a41427714c0e7f6d89ac0df1415"} Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.887819 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lfdq\" (UniqueName: \"kubernetes.io/projected/29e96de0-75d9-4da0-a41e-3b93a7274083-kube-api-access-5lfdq\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.887883 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e96de0-75d9-4da0-a41e-3b93a7274083-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.887933 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e96de0-75d9-4da0-a41e-3b93a7274083-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.887980 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29e96de0-75d9-4da0-a41e-3b93a7274083-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.887996 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.888246 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.888341 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-kolla-config\") pod \"openstack-cell1-galera-0\" 
(UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.888491 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.965623 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.966617 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.968841 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-66zxb" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.970574 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.970692 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.993128 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.994377 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.994426 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.994498 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.994550 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lfdq\" (UniqueName: \"kubernetes.io/projected/29e96de0-75d9-4da0-a41e-3b93a7274083-kube-api-access-5lfdq\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.994604 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e96de0-75d9-4da0-a41e-3b93a7274083-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.994661 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e96de0-75d9-4da0-a41e-3b93a7274083-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.994720 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/29e96de0-75d9-4da0-a41e-3b93a7274083-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.994745 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.997886 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.998479 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:13 crc kubenswrapper[4817]: I0314 05:50:13.998910 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29e96de0-75d9-4da0-a41e-3b93a7274083-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.000362 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.007939 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e96de0-75d9-4da0-a41e-3b93a7274083-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.005779 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29e96de0-75d9-4da0-a41e-3b93a7274083-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.017813 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lfdq\" (UniqueName: \"kubernetes.io/projected/29e96de0-75d9-4da0-a41e-3b93a7274083-kube-api-access-5lfdq\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.022545 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e96de0-75d9-4da0-a41e-3b93a7274083-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.064087 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"29e96de0-75d9-4da0-a41e-3b93a7274083\") " pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.100535 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.100707 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.100753 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-kolla-config\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.100909 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lx2t\" (UniqueName: \"kubernetes.io/projected/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-kube-api-access-6lx2t\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.100965 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-config-data\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc 
kubenswrapper[4817]: I0314 05:50:14.202252 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.202304 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.202338 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-kolla-config\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.202378 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lx2t\" (UniqueName: \"kubernetes.io/projected/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-kube-api-access-6lx2t\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.202431 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-config-data\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.203148 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-kolla-config\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.204623 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-config-data\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.207296 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.209039 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.222450 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lx2t\" (UniqueName: \"kubernetes.io/projected/0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e-kube-api-access-6lx2t\") pod \"memcached-0\" (UID: \"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e\") " pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.294837 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 14 05:50:14 crc kubenswrapper[4817]: I0314 05:50:14.330118 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 14 05:50:16 crc kubenswrapper[4817]: I0314 05:50:16.233821 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 05:50:16 crc kubenswrapper[4817]: I0314 05:50:16.234927 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 05:50:16 crc kubenswrapper[4817]: I0314 05:50:16.237705 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gwxxp" Mar 14 05:50:16 crc kubenswrapper[4817]: I0314 05:50:16.239754 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 05:50:16 crc kubenswrapper[4817]: I0314 05:50:16.339468 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9czd\" (UniqueName: \"kubernetes.io/projected/480e01af-fe53-4341-9987-b53552c7b77f-kube-api-access-x9czd\") pod \"kube-state-metrics-0\" (UID: \"480e01af-fe53-4341-9987-b53552c7b77f\") " pod="openstack/kube-state-metrics-0" Mar 14 05:50:16 crc kubenswrapper[4817]: I0314 05:50:16.440497 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9czd\" (UniqueName: \"kubernetes.io/projected/480e01af-fe53-4341-9987-b53552c7b77f-kube-api-access-x9czd\") pod \"kube-state-metrics-0\" (UID: \"480e01af-fe53-4341-9987-b53552c7b77f\") " pod="openstack/kube-state-metrics-0" Mar 14 05:50:16 crc kubenswrapper[4817]: I0314 05:50:16.459505 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9czd\" (UniqueName: \"kubernetes.io/projected/480e01af-fe53-4341-9987-b53552c7b77f-kube-api-access-x9czd\") pod \"kube-state-metrics-0\" (UID: \"480e01af-fe53-4341-9987-b53552c7b77f\") " pod="openstack/kube-state-metrics-0" Mar 14 05:50:16 crc kubenswrapper[4817]: I0314 05:50:16.569506 4817 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.667165 4817 scope.go:117] "RemoveContainer" containerID="26ee2e61994329679d241da2812a3f6a410ea5053c4861dbc7a26145403a6a22" Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.823070 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nsn5q"] Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.824066 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.826313 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.826469 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.826508 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-ms4jm" Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.835005 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nsn5q"] Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.842282 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rj9cw"] Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.843770 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:19 crc kubenswrapper[4817]: I0314 05:50:19.903601 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rj9cw"] Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002551 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-log-ovn\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002619 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-run\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002641 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9790d6d0-9013-42cf-bb3d-394f5fc292ba-ovn-controller-tls-certs\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002672 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-lib\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002700 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/900c734a-f840-4877-98fd-ff1415d6ad18-scripts\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002738 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9790d6d0-9013-42cf-bb3d-394f5fc292ba-scripts\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002758 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-log\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002775 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzzsm\" (UniqueName: \"kubernetes.io/projected/9790d6d0-9013-42cf-bb3d-394f5fc292ba-kube-api-access-lzzsm\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002792 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-run-ovn\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002808 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9790d6d0-9013-42cf-bb3d-394f5fc292ba-combined-ca-bundle\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002847 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-run\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002877 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-etc-ovs\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.002906 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpvk\" (UniqueName: \"kubernetes.io/projected/900c734a-f840-4877-98fd-ff1415d6ad18-kube-api-access-zzpvk\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.103730 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-run\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.103808 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-etc-ovs\") pod 
\"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.103858 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpvk\" (UniqueName: \"kubernetes.io/projected/900c734a-f840-4877-98fd-ff1415d6ad18-kube-api-access-zzpvk\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.103915 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-log-ovn\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.103943 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-run\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.103959 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9790d6d0-9013-42cf-bb3d-394f5fc292ba-ovn-controller-tls-certs\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.103988 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-lib\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" 
Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104005 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/900c734a-f840-4877-98fd-ff1415d6ad18-scripts\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104045 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9790d6d0-9013-42cf-bb3d-394f5fc292ba-scripts\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104066 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-log\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104085 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzzsm\" (UniqueName: \"kubernetes.io/projected/9790d6d0-9013-42cf-bb3d-394f5fc292ba-kube-api-access-lzzsm\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104108 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-run-ovn\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104133 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9790d6d0-9013-42cf-bb3d-394f5fc292ba-combined-ca-bundle\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104322 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-etc-ovs\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104427 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-run\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104491 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-run-ovn\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104536 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9790d6d0-9013-42cf-bb3d-394f5fc292ba-var-log-ovn\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104566 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-log\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " 
pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104556 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-run\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.104629 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/900c734a-f840-4877-98fd-ff1415d6ad18-var-lib\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.106453 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/900c734a-f840-4877-98fd-ff1415d6ad18-scripts\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.106928 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9790d6d0-9013-42cf-bb3d-394f5fc292ba-scripts\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.117489 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9790d6d0-9013-42cf-bb3d-394f5fc292ba-ovn-controller-tls-certs\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.124985 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lzzsm\" (UniqueName: \"kubernetes.io/projected/9790d6d0-9013-42cf-bb3d-394f5fc292ba-kube-api-access-lzzsm\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.137251 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9790d6d0-9013-42cf-bb3d-394f5fc292ba-combined-ca-bundle\") pod \"ovn-controller-nsn5q\" (UID: \"9790d6d0-9013-42cf-bb3d-394f5fc292ba\") " pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.140069 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nsn5q" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.145617 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpvk\" (UniqueName: \"kubernetes.io/projected/900c734a-f840-4877-98fd-ff1415d6ad18-kube-api-access-zzpvk\") pod \"ovn-controller-ovs-rj9cw\" (UID: \"900c734a-f840-4877-98fd-ff1415d6ad18\") " pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.160289 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.689640 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.690811 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.693053 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.694059 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.694297 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.694499 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.699318 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4nfd6" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.713141 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.813448 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/523f27ff-3994-4742-af55-15befc50017e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.813492 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/523f27ff-3994-4742-af55-15befc50017e-config\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.813517 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.813695 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/523f27ff-3994-4742-af55-15befc50017e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.813763 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.813835 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.813989 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgsm\" (UniqueName: \"kubernetes.io/projected/523f27ff-3994-4742-af55-15befc50017e-kube-api-access-jhgsm\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.814041 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.915746 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.915806 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhgsm\" (UniqueName: \"kubernetes.io/projected/523f27ff-3994-4742-af55-15befc50017e-kube-api-access-jhgsm\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.915959 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/523f27ff-3994-4742-af55-15befc50017e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.915991 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/523f27ff-3994-4742-af55-15befc50017e-config\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.916029 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 
05:50:20.916128 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/523f27ff-3994-4742-af55-15befc50017e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.916170 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.916241 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.916858 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/523f27ff-3994-4742-af55-15befc50017e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.916951 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/523f27ff-3994-4742-af55-15befc50017e-config\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.917246 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.917406 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/523f27ff-3994-4742-af55-15befc50017e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.920249 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.921123 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.922151 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/523f27ff-3994-4742-af55-15befc50017e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.939190 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 
05:50:20 crc kubenswrapper[4817]: I0314 05:50:20.939462 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhgsm\" (UniqueName: \"kubernetes.io/projected/523f27ff-3994-4742-af55-15befc50017e-kube-api-access-jhgsm\") pod \"ovsdbserver-sb-0\" (UID: \"523f27ff-3994-4742-af55-15befc50017e\") " pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:21 crc kubenswrapper[4817]: I0314 05:50:21.008609 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.550935 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.552375 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.555951 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.556326 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.556549 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.556716 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gnmg9" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.556855 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.751086 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.751174 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.751243 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.751324 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d70fa19-d760-4bb0-b182-c1bcf7797f96-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.751353 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d70fa19-d760-4bb0-b182-c1bcf7797f96-config\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.751406 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5d70fa19-d760-4bb0-b182-c1bcf7797f96-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: 
\"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.751503 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.751564 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlhnc\" (UniqueName: \"kubernetes.io/projected/5d70fa19-d760-4bb0-b182-c1bcf7797f96-kube-api-access-rlhnc\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.852984 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.853055 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.853083 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc 
kubenswrapper[4817]: I0314 05:50:22.853139 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d70fa19-d760-4bb0-b182-c1bcf7797f96-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.853164 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d70fa19-d760-4bb0-b182-c1bcf7797f96-config\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.853217 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5d70fa19-d760-4bb0-b182-c1bcf7797f96-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.853261 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.853321 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlhnc\" (UniqueName: \"kubernetes.io/projected/5d70fa19-d760-4bb0-b182-c1bcf7797f96-kube-api-access-rlhnc\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.853702 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.854438 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5d70fa19-d760-4bb0-b182-c1bcf7797f96-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.854842 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d70fa19-d760-4bb0-b182-c1bcf7797f96-config\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.856066 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d70fa19-d760-4bb0-b182-c1bcf7797f96-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.858292 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.859820 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc 
kubenswrapper[4817]: I0314 05:50:22.862377 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d70fa19-d760-4bb0-b182-c1bcf7797f96-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.874827 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlhnc\" (UniqueName: \"kubernetes.io/projected/5d70fa19-d760-4bb0-b182-c1bcf7797f96-kube-api-access-rlhnc\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:22 crc kubenswrapper[4817]: I0314 05:50:22.881001 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5d70fa19-d760-4bb0-b182-c1bcf7797f96\") " pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:23 crc kubenswrapper[4817]: I0314 05:50:23.185862 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 14 05:50:33 crc kubenswrapper[4817]: E0314 05:50:33.492102 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 14 05:50:33 crc kubenswrapper[4817]: E0314 05:50:33.492854 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xht8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(f20e21c5-3d26-4494-a4d7-43323e059f31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:50:33 crc 
kubenswrapper[4817]: E0314 05:50:33.494431 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" Mar 14 05:50:33 crc kubenswrapper[4817]: E0314 05:50:33.518504 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 14 05:50:33 crc kubenswrapper[4817]: E0314 05:50:33.518697 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vslw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(aa3ffb28-ad8e-4691-a5ff-ae17d083a019): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:50:33 crc 
kubenswrapper[4817]: E0314 05:50:33.520445 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" Mar 14 05:50:34 crc kubenswrapper[4817]: E0314 05:50:34.025875 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" Mar 14 05:50:34 crc kubenswrapper[4817]: E0314 05:50:34.026019 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" Mar 14 05:50:38 crc kubenswrapper[4817]: I0314 05:50:38.565537 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:50:38 crc kubenswrapper[4817]: I0314 05:50:38.565927 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.833116 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.833566 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfhns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,
RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-g9pvt_openstack(c1a7c645-ea1c-4465-ae67-c995333c1dec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.834794 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" podUID="c1a7c645-ea1c-4465-ae67-c995333c1dec" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.958318 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.958495 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95cdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-4rr7f_openstack(745d9a05-a9f2-4651-8ee2-524fc96e0c40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.959669 4817 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" podUID="745d9a05-a9f2-4651-8ee2-524fc96e0c40" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.980710 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.981055 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ctq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-hr222_openstack(340de821-498b-4951-a770-43e885b0f981): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:50:39 crc kubenswrapper[4817]: E0314 05:50:39.982933 4817 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" podUID="340de821-498b-4951-a770-43e885b0f981" Mar 14 05:50:40 crc kubenswrapper[4817]: E0314 05:50:40.018558 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 14 05:50:40 crc kubenswrapper[4817]: E0314 05:50:40.018744 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr9f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-88fmz_openstack(c7b83403-20ed-435f-b693-9bec1e143db1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:50:40 crc kubenswrapper[4817]: E0314 05:50:40.019969 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" podUID="c7b83403-20ed-435f-b693-9bec1e143db1" Mar 14 05:50:40 crc kubenswrapper[4817]: E0314 05:50:40.086152 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" podUID="745d9a05-a9f2-4651-8ee2-524fc96e0c40" Mar 14 05:50:40 crc kubenswrapper[4817]: E0314 05:50:40.086241 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" podUID="c1a7c645-ea1c-4465-ae67-c995333c1dec" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.459935 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 05:50:40 crc kubenswrapper[4817]: W0314 05:50:40.461321 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29e96de0_75d9_4da0_a41e_3b93a7274083.slice/crio-fb7246af30ace80a395d484e47fb668218985dc1ad325c1b23c469c25e69da44 WatchSource:0}: Error finding container fb7246af30ace80a395d484e47fb668218985dc1ad325c1b23c469c25e69da44: Status 404 returned error can't find the container with id fb7246af30ace80a395d484e47fb668218985dc1ad325c1b23c469c25e69da44 Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.657270 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.668267 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.670061 4817 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.682859 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.683253 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nsn5q"] Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.690174 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.702245 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr9f5\" (UniqueName: \"kubernetes.io/projected/c7b83403-20ed-435f-b693-9bec1e143db1-kube-api-access-tr9f5\") pod \"c7b83403-20ed-435f-b693-9bec1e143db1\" (UID: \"c7b83403-20ed-435f-b693-9bec1e143db1\") " Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.703572 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ctq9\" (UniqueName: \"kubernetes.io/projected/340de821-498b-4951-a770-43e885b0f981-kube-api-access-4ctq9\") pod \"340de821-498b-4951-a770-43e885b0f981\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.703635 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-dns-svc\") pod \"340de821-498b-4951-a770-43e885b0f981\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") " Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.703699 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-config\") pod \"340de821-498b-4951-a770-43e885b0f981\" (UID: \"340de821-498b-4951-a770-43e885b0f981\") 
" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.703751 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b83403-20ed-435f-b693-9bec1e143db1-config\") pod \"c7b83403-20ed-435f-b693-9bec1e143db1\" (UID: \"c7b83403-20ed-435f-b693-9bec1e143db1\") " Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.704446 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b83403-20ed-435f-b693-9bec1e143db1-config" (OuterVolumeSpecName: "config") pod "c7b83403-20ed-435f-b693-9bec1e143db1" (UID: "c7b83403-20ed-435f-b693-9bec1e143db1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.704488 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "340de821-498b-4951-a770-43e885b0f981" (UID: "340de821-498b-4951-a770-43e885b0f981"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.704639 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-config" (OuterVolumeSpecName: "config") pod "340de821-498b-4951-a770-43e885b0f981" (UID: "340de821-498b-4951-a770-43e885b0f981"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.710487 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340de821-498b-4951-a770-43e885b0f981-kube-api-access-4ctq9" (OuterVolumeSpecName: "kube-api-access-4ctq9") pod "340de821-498b-4951-a770-43e885b0f981" (UID: "340de821-498b-4951-a770-43e885b0f981"). 
InnerVolumeSpecName "kube-api-access-4ctq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.716282 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b83403-20ed-435f-b693-9bec1e143db1-kube-api-access-tr9f5" (OuterVolumeSpecName: "kube-api-access-tr9f5") pod "c7b83403-20ed-435f-b693-9bec1e143db1" (UID: "c7b83403-20ed-435f-b693-9bec1e143db1"). InnerVolumeSpecName "kube-api-access-tr9f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.778134 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 05:50:40 crc kubenswrapper[4817]: W0314 05:50:40.781474 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod523f27ff_3994_4742_af55_15befc50017e.slice/crio-86cc449a0c8eca8ad80c92aaed9215ee5fdf32e3de4de6261d5e53e7cbe35d84 WatchSource:0}: Error finding container 86cc449a0c8eca8ad80c92aaed9215ee5fdf32e3de4de6261d5e53e7cbe35d84: Status 404 returned error can't find the container with id 86cc449a0c8eca8ad80c92aaed9215ee5fdf32e3de4de6261d5e53e7cbe35d84 Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.807956 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr9f5\" (UniqueName: \"kubernetes.io/projected/c7b83403-20ed-435f-b693-9bec1e143db1-kube-api-access-tr9f5\") on node \"crc\" DevicePath \"\"" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.807986 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ctq9\" (UniqueName: \"kubernetes.io/projected/340de821-498b-4951-a770-43e885b0f981-kube-api-access-4ctq9\") on node \"crc\" DevicePath \"\"" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.808000 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.808009 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340de821-498b-4951-a770-43e885b0f981-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.808018 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b83403-20ed-435f-b693-9bec1e143db1-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:50:40 crc kubenswrapper[4817]: I0314 05:50:40.891633 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 05:50:40 crc kubenswrapper[4817]: W0314 05:50:40.894995 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d70fa19_d760_4bb0_b182_c1bcf7797f96.slice/crio-2d436b493c89fce8bebb8c58ae3fd6b19336b170abe2ebd10dacf3b58a0d6e42 WatchSource:0}: Error finding container 2d436b493c89fce8bebb8c58ae3fd6b19336b170abe2ebd10dacf3b58a0d6e42: Status 404 returned error can't find the container with id 2d436b493c89fce8bebb8c58ae3fd6b19336b170abe2ebd10dacf3b58a0d6e42 Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.090782 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" event={"ID":"340de821-498b-4951-a770-43e885b0f981","Type":"ContainerDied","Data":"9d05a1ef5dd5d2bfc279b5a71c53f65baebcf9bc0be09bae2062bc8b58a1ae54"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.090859 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hr222" Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.103031 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"099647fc-5cd6-4547-9400-8df4b6016b50","Type":"ContainerStarted","Data":"39161a11e52ce46686ea4d2d0764d3477bbfb2bf6355442ebe0d08da79fc33d8"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.107117 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"480e01af-fe53-4341-9987-b53552c7b77f","Type":"ContainerStarted","Data":"954382abe2b87fecbbe41e60074c8de6496242dcfebd8322a252d38eac2ca381"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.109222 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29e96de0-75d9-4da0-a41e-3b93a7274083","Type":"ContainerStarted","Data":"2150a9cacbd197fbdf12c75612b03633cc299f953c39ef8ef58947d84367669d"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.109254 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29e96de0-75d9-4da0-a41e-3b93a7274083","Type":"ContainerStarted","Data":"fb7246af30ace80a395d484e47fb668218985dc1ad325c1b23c469c25e69da44"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.114160 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" event={"ID":"c7b83403-20ed-435f-b693-9bec1e143db1","Type":"ContainerDied","Data":"339690da31d0468d8b59c2cf3ecc3005de6d6dd500180654ff687810cbe86aa5"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.114262 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-88fmz" Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.116880 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nsn5q" event={"ID":"9790d6d0-9013-42cf-bb3d-394f5fc292ba","Type":"ContainerStarted","Data":"2213cc58429632e2e9765b95b59076c75de7089c15025c5a5fadb66d746a2d7b"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.120188 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5d70fa19-d760-4bb0-b182-c1bcf7797f96","Type":"ContainerStarted","Data":"2d436b493c89fce8bebb8c58ae3fd6b19336b170abe2ebd10dacf3b58a0d6e42"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.139109 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e","Type":"ContainerStarted","Data":"aeca8f6d4fd238a881c28fb8b58aa9b78ec0ce2a98d804e64560198506eeb9cd"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.140206 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"523f27ff-3994-4742-af55-15befc50017e","Type":"ContainerStarted","Data":"86cc449a0c8eca8ad80c92aaed9215ee5fdf32e3de4de6261d5e53e7cbe35d84"} Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.166514 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hr222"] Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.175350 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hr222"] Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.202924 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-88fmz"] Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.210215 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-88fmz"] Mar 14 05:50:41 crc kubenswrapper[4817]: I0314 05:50:41.763570 
4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rj9cw"] Mar 14 05:50:42 crc kubenswrapper[4817]: W0314 05:50:42.101395 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod900c734a_f840_4877_98fd_ff1415d6ad18.slice/crio-60aefc7b5a0a1c57469d1198725d58b542416e85c1abd46021d1a03954a45902 WatchSource:0}: Error finding container 60aefc7b5a0a1c57469d1198725d58b542416e85c1abd46021d1a03954a45902: Status 404 returned error can't find the container with id 60aefc7b5a0a1c57469d1198725d58b542416e85c1abd46021d1a03954a45902 Mar 14 05:50:42 crc kubenswrapper[4817]: I0314 05:50:42.170347 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rj9cw" event={"ID":"900c734a-f840-4877-98fd-ff1415d6ad18","Type":"ContainerStarted","Data":"60aefc7b5a0a1c57469d1198725d58b542416e85c1abd46021d1a03954a45902"} Mar 14 05:50:42 crc kubenswrapper[4817]: I0314 05:50:42.746121 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340de821-498b-4951-a770-43e885b0f981" path="/var/lib/kubelet/pods/340de821-498b-4951-a770-43e885b0f981/volumes" Mar 14 05:50:42 crc kubenswrapper[4817]: I0314 05:50:42.746867 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b83403-20ed-435f-b693-9bec1e143db1" path="/var/lib/kubelet/pods/c7b83403-20ed-435f-b693-9bec1e143db1/volumes" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.209399 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-944q4"] Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.211073 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.221556 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.227694 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-944q4"] Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.356233 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e515005b-34d6-46cb-8486-cf2e09877f9d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.356277 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e515005b-34d6-46cb-8486-cf2e09877f9d-combined-ca-bundle\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.356298 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrj2z\" (UniqueName: \"kubernetes.io/projected/e515005b-34d6-46cb-8486-cf2e09877f9d-kube-api-access-zrj2z\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.356333 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e515005b-34d6-46cb-8486-cf2e09877f9d-ovs-rundir\") pod \"ovn-controller-metrics-944q4\" (UID: 
\"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.356377 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e515005b-34d6-46cb-8486-cf2e09877f9d-config\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.356404 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e515005b-34d6-46cb-8486-cf2e09877f9d-ovn-rundir\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.364859 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-g9pvt"] Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.394856 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2bk5q"] Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.398909 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.402886 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.411342 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2bk5q"] Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.458080 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e515005b-34d6-46cb-8486-cf2e09877f9d-ovs-rundir\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.458153 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e515005b-34d6-46cb-8486-cf2e09877f9d-config\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.458186 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e515005b-34d6-46cb-8486-cf2e09877f9d-ovn-rundir\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.458272 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e515005b-34d6-46cb-8486-cf2e09877f9d-combined-ca-bundle\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.458293 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e515005b-34d6-46cb-8486-cf2e09877f9d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.458311 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrj2z\" (UniqueName: \"kubernetes.io/projected/e515005b-34d6-46cb-8486-cf2e09877f9d-kube-api-access-zrj2z\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.459464 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e515005b-34d6-46cb-8486-cf2e09877f9d-ovs-rundir\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.460454 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e515005b-34d6-46cb-8486-cf2e09877f9d-config\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.460590 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e515005b-34d6-46cb-8486-cf2e09877f9d-ovn-rundir\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.467065 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e515005b-34d6-46cb-8486-cf2e09877f9d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.471104 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e515005b-34d6-46cb-8486-cf2e09877f9d-combined-ca-bundle\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.483233 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrj2z\" (UniqueName: \"kubernetes.io/projected/e515005b-34d6-46cb-8486-cf2e09877f9d-kube-api-access-zrj2z\") pod \"ovn-controller-metrics-944q4\" (UID: \"e515005b-34d6-46cb-8486-cf2e09877f9d\") " pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.542377 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-944q4" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.556114 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rr7f"] Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.563254 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.563356 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.563413 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-config\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.563528 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq97c\" (UniqueName: \"kubernetes.io/projected/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-kube-api-access-nq97c\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.577441 4817 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-8554648995-jwq5b"] Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.579532 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jwq5b" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.581568 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.599198 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jwq5b"] Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666242 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq97c\" (UniqueName: \"kubernetes.io/projected/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-kube-api-access-nq97c\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666340 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666389 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-dns-svc\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666408 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666429 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6b2\" (UniqueName: \"kubernetes.io/projected/26e76627-0df6-400a-981f-8672983e6741-kube-api-access-rn6b2\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666476 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666514 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666552 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-config\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.666572 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-config\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.667746 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.668764 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-config\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.670053 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.692369 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq97c\" (UniqueName: \"kubernetes.io/projected/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-kube-api-access-nq97c\") pod \"dnsmasq-dns-6bc7876d45-2bk5q\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.717500 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.768325 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.768441 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-dns-svc\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.768473 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.768505 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6b2\" (UniqueName: \"kubernetes.io/projected/26e76627-0df6-400a-981f-8672983e6741-kube-api-access-rn6b2\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.768591 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-config\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.769390 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-config\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.769998 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.770618 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-dns-svc\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.772065 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.793440 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6b2\" (UniqueName: \"kubernetes.io/projected/26e76627-0df6-400a-981f-8672983e6741-kube-api-access-rn6b2\") pod \"dnsmasq-dns-8554648995-jwq5b\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:43 crc kubenswrapper[4817]: I0314 05:50:43.919753 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.113629 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f"
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.239246 4817 generic.go:334] "Generic (PLEG): container finished" podID="099647fc-5cd6-4547-9400-8df4b6016b50" containerID="39161a11e52ce46686ea4d2d0764d3477bbfb2bf6355442ebe0d08da79fc33d8" exitCode=0
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.239337 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"099647fc-5cd6-4547-9400-8df4b6016b50","Type":"ContainerDied","Data":"39161a11e52ce46686ea4d2d0764d3477bbfb2bf6355442ebe0d08da79fc33d8"}
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.242452 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f" event={"ID":"745d9a05-a9f2-4651-8ee2-524fc96e0c40","Type":"ContainerDied","Data":"4bf08670ce743406d325f4934b71afd1e509b43042011a3ecf0bc1cf1d35d323"}
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.242499 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4rr7f"
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.276698 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-dns-svc\") pod \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") "
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.276814 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95cdk\" (UniqueName: \"kubernetes.io/projected/745d9a05-a9f2-4651-8ee2-524fc96e0c40-kube-api-access-95cdk\") pod \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") "
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.276866 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-config\") pod \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\" (UID: \"745d9a05-a9f2-4651-8ee2-524fc96e0c40\") "
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.277457 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "745d9a05-a9f2-4651-8ee2-524fc96e0c40" (UID: "745d9a05-a9f2-4651-8ee2-524fc96e0c40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.277794 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-config" (OuterVolumeSpecName: "config") pod "745d9a05-a9f2-4651-8ee2-524fc96e0c40" (UID: "745d9a05-a9f2-4651-8ee2-524fc96e0c40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.282295 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/745d9a05-a9f2-4651-8ee2-524fc96e0c40-kube-api-access-95cdk" (OuterVolumeSpecName: "kube-api-access-95cdk") pod "745d9a05-a9f2-4651-8ee2-524fc96e0c40" (UID: "745d9a05-a9f2-4651-8ee2-524fc96e0c40"). InnerVolumeSpecName "kube-api-access-95cdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.380006 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.380042 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95cdk\" (UniqueName: \"kubernetes.io/projected/745d9a05-a9f2-4651-8ee2-524fc96e0c40-kube-api-access-95cdk\") on node \"crc\" DevicePath \"\""
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.380056 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/745d9a05-a9f2-4651-8ee2-524fc96e0c40-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.599792 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rr7f"]
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.607573 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4rr7f"]
Mar 14 05:50:44 crc kubenswrapper[4817]: I0314 05:50:44.742490 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="745d9a05-a9f2-4651-8ee2-524fc96e0c40" path="/var/lib/kubelet/pods/745d9a05-a9f2-4651-8ee2-524fc96e0c40/volumes"
Mar 14 05:50:45 crc kubenswrapper[4817]: I0314 05:50:45.259242 4817 generic.go:334] "Generic (PLEG): container finished" podID="29e96de0-75d9-4da0-a41e-3b93a7274083" containerID="2150a9cacbd197fbdf12c75612b03633cc299f953c39ef8ef58947d84367669d" exitCode=0
Mar 14 05:50:45 crc kubenswrapper[4817]: I0314 05:50:45.259291 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29e96de0-75d9-4da0-a41e-3b93a7274083","Type":"ContainerDied","Data":"2150a9cacbd197fbdf12c75612b03633cc299f953c39ef8ef58947d84367669d"}
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.158848 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt"
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.225493 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfhns\" (UniqueName: \"kubernetes.io/projected/c1a7c645-ea1c-4465-ae67-c995333c1dec-kube-api-access-kfhns\") pod \"c1a7c645-ea1c-4465-ae67-c995333c1dec\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") "
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.225558 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-config\") pod \"c1a7c645-ea1c-4465-ae67-c995333c1dec\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") "
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.225762 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-dns-svc\") pod \"c1a7c645-ea1c-4465-ae67-c995333c1dec\" (UID: \"c1a7c645-ea1c-4465-ae67-c995333c1dec\") "
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.226553 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1a7c645-ea1c-4465-ae67-c995333c1dec" (UID: "c1a7c645-ea1c-4465-ae67-c995333c1dec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.226827 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-config" (OuterVolumeSpecName: "config") pod "c1a7c645-ea1c-4465-ae67-c995333c1dec" (UID: "c1a7c645-ea1c-4465-ae67-c995333c1dec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.234262 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a7c645-ea1c-4465-ae67-c995333c1dec-kube-api-access-kfhns" (OuterVolumeSpecName: "kube-api-access-kfhns") pod "c1a7c645-ea1c-4465-ae67-c995333c1dec" (UID: "c1a7c645-ea1c-4465-ae67-c995333c1dec"). InnerVolumeSpecName "kube-api-access-kfhns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.277185 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt" event={"ID":"c1a7c645-ea1c-4465-ae67-c995333c1dec","Type":"ContainerDied","Data":"57bbd863e59bf4e4d83a11c5c7aef7af077adf14d1e13b0f4814151d8e3d3497"}
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.277250 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-g9pvt"
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.331801 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfhns\" (UniqueName: \"kubernetes.io/projected/c1a7c645-ea1c-4465-ae67-c995333c1dec-kube-api-access-kfhns\") on node \"crc\" DevicePath \"\""
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.331859 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.331871 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a7c645-ea1c-4465-ae67-c995333c1dec-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.349515 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-g9pvt"]
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.358343 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-g9pvt"]
Mar 14 05:50:46 crc kubenswrapper[4817]: I0314 05:50:46.744437 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a7c645-ea1c-4465-ae67-c995333c1dec" path="/var/lib/kubelet/pods/c1a7c645-ea1c-4465-ae67-c995333c1dec/volumes"
Mar 14 05:50:48 crc kubenswrapper[4817]: I0314 05:50:48.261799 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-944q4"]
Mar 14 05:50:48 crc kubenswrapper[4817]: I0314 05:50:48.341738 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jwq5b"]
Mar 14 05:50:48 crc kubenswrapper[4817]: I0314 05:50:48.353193 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2bk5q"]
Mar 14 05:50:48 crc kubenswrapper[4817]: W0314 05:50:48.494793 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode515005b_34d6_46cb_8486_cf2e09877f9d.slice/crio-beeb99ba3c6c65e20f76a489a62f0b7d37b310849eb750267a55adb7c2a0a38e WatchSource:0}: Error finding container beeb99ba3c6c65e20f76a489a62f0b7d37b310849eb750267a55adb7c2a0a38e: Status 404 returned error can't find the container with id beeb99ba3c6c65e20f76a489a62f0b7d37b310849eb750267a55adb7c2a0a38e
Mar 14 05:50:48 crc kubenswrapper[4817]: W0314 05:50:48.499979 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26e76627_0df6_400a_981f_8672983e6741.slice/crio-c639fa9492737899fdd888193124465d786915077d1709fbc6686613da26587a WatchSource:0}: Error finding container c639fa9492737899fdd888193124465d786915077d1709fbc6686613da26587a: Status 404 returned error can't find the container with id c639fa9492737899fdd888193124465d786915077d1709fbc6686613da26587a
Mar 14 05:50:48 crc kubenswrapper[4817]: W0314 05:50:48.513324 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2925ed2d_4a13_4fc2_b62b_1bb73fd2f69c.slice/crio-347758eef57cdf90a397b68e2e3988d6724dcce2dd9601bec316c2c2f49e1b36 WatchSource:0}: Error finding container 347758eef57cdf90a397b68e2e3988d6724dcce2dd9601bec316c2c2f49e1b36: Status 404 returned error can't find the container with id 347758eef57cdf90a397b68e2e3988d6724dcce2dd9601bec316c2c2f49e1b36
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.299143 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"523f27ff-3994-4742-af55-15befc50017e","Type":"ContainerStarted","Data":"d07d3aac410f35a6c454cb290c4e0f45c25a7bffe923250ec722e4a67908a8a6"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.301158 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"480e01af-fe53-4341-9987-b53552c7b77f","Type":"ContainerStarted","Data":"d36f4f8082cf3d4d2972ff1acee7869e2ee239fff5708afd8dc0e170648fe404"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.301282 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.303208 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nsn5q" event={"ID":"9790d6d0-9013-42cf-bb3d-394f5fc292ba","Type":"ContainerStarted","Data":"2609338b1d6636367e7198be560be3367ff65da17540b66d03da669bf8c6782b"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.303875 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nsn5q"
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.305414 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5d70fa19-d760-4bb0-b182-c1bcf7797f96","Type":"ContainerStarted","Data":"08c4b047b7edf0930d618842ddf795939715b1faa3f20d0f45eae6f5bec31d52"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.307171 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e","Type":"ContainerStarted","Data":"893f86ea00c58650f06450adce9e1b9a75377df1fe2115e275e7b021b950cd80"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.307275 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.309068 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"099647fc-5cd6-4547-9400-8df4b6016b50","Type":"ContainerStarted","Data":"0e15b38ff156b65bc2b844648bdb611ccecbce2a4685ca1debb62da7f6d86859"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.310072 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" event={"ID":"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c","Type":"ContainerStarted","Data":"347758eef57cdf90a397b68e2e3988d6724dcce2dd9601bec316c2c2f49e1b36"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.311074 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jwq5b" event={"ID":"26e76627-0df6-400a-981f-8672983e6741","Type":"ContainerStarted","Data":"c639fa9492737899fdd888193124465d786915077d1709fbc6686613da26587a"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.312783 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rj9cw" event={"ID":"900c734a-f840-4877-98fd-ff1415d6ad18","Type":"ContainerStarted","Data":"59d643eac65bd72d6fe6ea025a635f634d4e7a4944e1f0273ca6e574e436c78f"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.316376 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=25.265338346 podStartE2EDuration="33.316360229s" podCreationTimestamp="2026-03-14 05:50:16 +0000 UTC" firstStartedPulling="2026-03-14 05:50:40.672247761 +0000 UTC m=+1094.710508507" lastFinishedPulling="2026-03-14 05:50:48.723269654 +0000 UTC m=+1102.761530390" observedRunningTime="2026-03-14 05:50:49.315248367 +0000 UTC m=+1103.353509123" watchObservedRunningTime="2026-03-14 05:50:49.316360229 +0000 UTC m=+1103.354620975"
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.317369 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-944q4" event={"ID":"e515005b-34d6-46cb-8486-cf2e09877f9d","Type":"ContainerStarted","Data":"beeb99ba3c6c65e20f76a489a62f0b7d37b310849eb750267a55adb7c2a0a38e"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.321472 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"29e96de0-75d9-4da0-a41e-3b93a7274083","Type":"ContainerStarted","Data":"63fe6193e3a396b671e4ce91d28f3cda99d1608fc7a56998d9df569c1a1f2775"}
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.374425 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=30.190159522 podStartE2EDuration="36.374402573s" podCreationTimestamp="2026-03-14 05:50:13 +0000 UTC" firstStartedPulling="2026-03-14 05:50:40.668043849 +0000 UTC m=+1094.706304585" lastFinishedPulling="2026-03-14 05:50:46.85228689 +0000 UTC m=+1100.890547636" observedRunningTime="2026-03-14 05:50:49.336341168 +0000 UTC m=+1103.374601904" watchObservedRunningTime="2026-03-14 05:50:49.374402573 +0000 UTC m=+1103.412663309"
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.375489 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nsn5q" podStartSLOduration=22.578368895 podStartE2EDuration="30.375482524s" podCreationTimestamp="2026-03-14 05:50:19 +0000 UTC" firstStartedPulling="2026-03-14 05:50:40.682541319 +0000 UTC m=+1094.720802055" lastFinishedPulling="2026-03-14 05:50:48.479654938 +0000 UTC m=+1102.517915684" observedRunningTime="2026-03-14 05:50:49.366646778 +0000 UTC m=+1103.404907524" watchObservedRunningTime="2026-03-14 05:50:49.375482524 +0000 UTC m=+1103.413743270"
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.412959 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.896226805 podStartE2EDuration="38.41293918s" podCreationTimestamp="2026-03-14 05:50:11 +0000 UTC" firstStartedPulling="2026-03-14 05:50:13.457065664 +0000 UTC m=+1067.495326410" lastFinishedPulling="2026-03-14 05:50:39.973778039 +0000 UTC m=+1094.012038785" observedRunningTime="2026-03-14 05:50:49.409536972 +0000 UTC m=+1103.447797718" watchObservedRunningTime="2026-03-14 05:50:49.41293918 +0000 UTC m=+1103.451199936"
Mar 14 05:50:49 crc kubenswrapper[4817]: I0314 05:50:49.436494 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=37.436474833 podStartE2EDuration="37.436474833s" podCreationTimestamp="2026-03-14 05:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:50:49.43290447 +0000 UTC m=+1103.471165226" watchObservedRunningTime="2026-03-14 05:50:49.436474833 +0000 UTC m=+1103.474735579"
Mar 14 05:50:50 crc kubenswrapper[4817]: I0314 05:50:50.330182 4817 generic.go:334] "Generic (PLEG): container finished" podID="26e76627-0df6-400a-981f-8672983e6741" containerID="b90ae27da11aa98bc4fa842ac1f17a38e7e333eb595a2701ba5ed0703900b5fe" exitCode=0
Mar 14 05:50:50 crc kubenswrapper[4817]: I0314 05:50:50.330282 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jwq5b" event={"ID":"26e76627-0df6-400a-981f-8672983e6741","Type":"ContainerDied","Data":"b90ae27da11aa98bc4fa842ac1f17a38e7e333eb595a2701ba5ed0703900b5fe"}
Mar 14 05:50:50 crc kubenswrapper[4817]: I0314 05:50:50.333444 4817 generic.go:334] "Generic (PLEG): container finished" podID="900c734a-f840-4877-98fd-ff1415d6ad18" containerID="59d643eac65bd72d6fe6ea025a635f634d4e7a4944e1f0273ca6e574e436c78f" exitCode=0
Mar 14 05:50:50 crc kubenswrapper[4817]: I0314 05:50:50.333488 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rj9cw" event={"ID":"900c734a-f840-4877-98fd-ff1415d6ad18","Type":"ContainerDied","Data":"59d643eac65bd72d6fe6ea025a635f634d4e7a4944e1f0273ca6e574e436c78f"}
Mar 14 05:50:50 crc kubenswrapper[4817]: I0314 05:50:50.336741 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa3ffb28-ad8e-4691-a5ff-ae17d083a019","Type":"ContainerStarted","Data":"c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e"}
Mar 14 05:50:50 crc kubenswrapper[4817]: I0314 05:50:50.340031 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f20e21c5-3d26-4494-a4d7-43323e059f31","Type":"ContainerStarted","Data":"52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997"}
Mar 14 05:50:50 crc kubenswrapper[4817]: I0314 05:50:50.350378 4817 generic.go:334] "Generic (PLEG): container finished" podID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerID="05a3af1820e2d5d28bc5c03c5715ddc3aa51ce64cd614f359840fa6ee26c74b9" exitCode=0
Mar 14 05:50:50 crc kubenswrapper[4817]: I0314 05:50:50.350444 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" event={"ID":"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c","Type":"ContainerDied","Data":"05a3af1820e2d5d28bc5c03c5715ddc3aa51ce64cd614f359840fa6ee26c74b9"}
Mar 14 05:50:52 crc kubenswrapper[4817]: I0314 05:50:52.743854 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 14 05:50:52 crc kubenswrapper[4817]: I0314 05:50:52.743919 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 14 05:50:53 crc kubenswrapper[4817]: I0314 05:50:53.377842 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jwq5b" event={"ID":"26e76627-0df6-400a-981f-8672983e6741","Type":"ContainerStarted","Data":"d1d90b32ea6f4232313b9c1dc5186a0547158cfd45a9d7d611729b5e27383c0c"}
Mar 14 05:50:53 crc kubenswrapper[4817]: I0314 05:50:53.381253 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rj9cw" event={"ID":"900c734a-f840-4877-98fd-ff1415d6ad18","Type":"ContainerStarted","Data":"0462e6066ff554140fd8dbaa6f7025dfce707df008108057564a6282f8e66ea5"}
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.297813 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.331955 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.334083 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.415811 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rj9cw" event={"ID":"900c734a-f840-4877-98fd-ff1415d6ad18","Type":"ContainerStarted","Data":"9560ce621b902e46e857671e1f1a6ab3b6fa5c5a4bb3a449df56f7b9cddfb7d5"}
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.416133 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rj9cw"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.416149 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rj9cw"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.425675 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" event={"ID":"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c","Type":"ContainerStarted","Data":"b56e14abf3df5b46ab2159c519e6c44be5611e33ad35c78f2f8b2e35eafd3c69"}
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.426233 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.426314 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.451756 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rj9cw" podStartSLOduration=29.622553079 podStartE2EDuration="35.451736651s" podCreationTimestamp="2026-03-14 05:50:19 +0000 UTC" firstStartedPulling="2026-03-14 05:50:42.107040372 +0000 UTC m=+1096.145301118" lastFinishedPulling="2026-03-14 05:50:47.936223954 +0000 UTC m=+1101.974484690" observedRunningTime="2026-03-14 05:50:54.44048164 +0000 UTC m=+1108.478742406" watchObservedRunningTime="2026-03-14 05:50:54.451736651 +0000 UTC m=+1108.489997397"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.470061 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" podStartSLOduration=10.667386477 podStartE2EDuration="11.469828205s" podCreationTimestamp="2026-03-14 05:50:43 +0000 UTC" firstStartedPulling="2026-03-14 05:50:48.522768918 +0000 UTC m=+1102.561029664" lastFinishedPulling="2026-03-14 05:50:49.325210646 +0000 UTC m=+1103.363471392" observedRunningTime="2026-03-14 05:50:54.463523306 +0000 UTC m=+1108.501784052" watchObservedRunningTime="2026-03-14 05:50:54.469828205 +0000 UTC m=+1108.508088951"
Mar 14 05:50:54 crc kubenswrapper[4817]: I0314 05:50:54.483690 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-jwq5b" podStartSLOduration=10.665591328 podStartE2EDuration="11.483669969s" podCreationTimestamp="2026-03-14 05:50:43 +0000 UTC" firstStartedPulling="2026-03-14 05:50:48.507745802 +0000 UTC m=+1102.546006548" lastFinishedPulling="2026-03-14 05:50:49.325823983 +0000 UTC m=+1103.364085189" observedRunningTime="2026-03-14 05:50:54.480470448 +0000 UTC m=+1108.518731194" watchObservedRunningTime="2026-03-14 05:50:54.483669969 +0000 UTC m=+1108.521930715"
Mar 14 05:50:55 crc kubenswrapper[4817]: I0314 05:50:55.436029 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-944q4" event={"ID":"e515005b-34d6-46cb-8486-cf2e09877f9d","Type":"ContainerStarted","Data":"960da121fa55f45b1c9571c15220cb7e2cca29c30a83f434fa4f1e892dad8e22"}
Mar 14 05:50:55 crc kubenswrapper[4817]: I0314 05:50:55.440853 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"523f27ff-3994-4742-af55-15befc50017e","Type":"ContainerStarted","Data":"26cb5a3884cf7874a65dda53830985ed0bb603ec4c8c4c498cc9c2dba576030a"}
Mar 14 05:50:55 crc kubenswrapper[4817]: I0314 05:50:55.478267 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-944q4" podStartSLOduration=7.176606826 podStartE2EDuration="12.478240427s" podCreationTimestamp="2026-03-14 05:50:43 +0000 UTC" firstStartedPulling="2026-03-14 05:50:48.497436743 +0000 UTC m=+1102.535697489" lastFinishedPulling="2026-03-14 05:50:53.799070344 +0000 UTC m=+1107.837331090" observedRunningTime="2026-03-14 05:50:55.460284376 +0000 UTC m=+1109.498545202" watchObservedRunningTime="2026-03-14 05:50:55.478240427 +0000 UTC m=+1109.516501193"
Mar 14 05:50:55 crc kubenswrapper[4817]: I0314 05:50:55.489639 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.938554494 podStartE2EDuration="36.489623531s" podCreationTimestamp="2026-03-14 05:50:19 +0000 UTC" firstStartedPulling="2026-03-14 05:50:40.784977751 +0000 UTC m=+1094.823238497" lastFinishedPulling="2026-03-14 05:50:54.336046748 +0000 UTC m=+1108.374307534" observedRunningTime="2026-03-14 05:50:55.48503614 +0000 UTC m=+1109.523296896" watchObservedRunningTime="2026-03-14 05:50:55.489623531 +0000 UTC m=+1109.527884297"
Mar 14 05:50:56 crc kubenswrapper[4817]: I0314 05:50:56.009749 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Mar 14 05:50:56 crc kubenswrapper[4817]: I0314 05:50:56.453383 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5d70fa19-d760-4bb0-b182-c1bcf7797f96","Type":"ContainerStarted","Data":"3badd9a432631b689de165a9fc170641dc1ebe8bb000170bfa0382a129c91409"}
Mar 14 05:50:56 crc kubenswrapper[4817]: I0314 05:50:56.476579 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.216598719 podStartE2EDuration="35.476554071s" podCreationTimestamp="2026-03-14 05:50:21 +0000 UTC" firstStartedPulling="2026-03-14 05:50:40.897063962 +0000 UTC m=+1094.935324708" lastFinishedPulling="2026-03-14 05:50:55.157019314 +0000 UTC m=+1109.195280060" observedRunningTime="2026-03-14 05:50:56.471856577 +0000 UTC m=+1110.510117333" watchObservedRunningTime="2026-03-14 05:50:56.476554071 +0000 UTC m=+1110.514814827"
Mar 14 05:50:56 crc kubenswrapper[4817]: I0314 05:50:56.576629 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Mar 14 05:50:57 crc kubenswrapper[4817]: I0314 05:50:57.009107 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Mar 14 05:50:57 crc kubenswrapper[4817]: I0314 05:50:57.053238 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Mar 14 05:50:57 crc kubenswrapper[4817]: I0314 05:50:57.506543 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Mar 14 05:50:58 crc kubenswrapper[4817]: I0314 05:50:58.186081 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Mar 14 05:50:58 crc kubenswrapper[4817]: I0314 05:50:58.720195 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q"
Mar 14 05:50:58 crc kubenswrapper[4817]: I0314 05:50:58.817687 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 14 05:50:58 crc kubenswrapper[4817]: I0314 05:50:58.937152 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-jwq5b"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.019164 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2bk5q"]
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.031489 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="099647fc-5cd6-4547-9400-8df4b6016b50" containerName="galera" probeResult="failure" output=<
Mar 14 05:50:59 crc kubenswrapper[4817]: wsrep_local_state_comment (Joined) differs from Synced
Mar 14 05:50:59 crc kubenswrapper[4817]: >
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.186545 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.229653 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.474285 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" podUID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerName="dnsmasq-dns" containerID="cri-o://b56e14abf3df5b46ab2159c519e6c44be5611e33ad35c78f2f8b2e35eafd3c69" gracePeriod=10
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.517431 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.676632 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.677859 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.681768 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.682052 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9h4mn"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.682205 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.684305 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.692645 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.801309 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.801844 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/094c63b8-a153-4ae4-90a5-d65b5718abd1-config\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0"
Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.801876 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/094c63b8-a153-4ae4-90a5-d65b5718abd1-scripts\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0"
Mar 14 05:50:59 
crc kubenswrapper[4817]: I0314 05:50:59.801973 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.802034 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5s99\" (UniqueName: \"kubernetes.io/projected/094c63b8-a153-4ae4-90a5-d65b5718abd1-kube-api-access-t5s99\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.802079 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/094c63b8-a153-4ae4-90a5-d65b5718abd1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.802118 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.903072 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/094c63b8-a153-4ae4-90a5-d65b5718abd1-config\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.903120 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/094c63b8-a153-4ae4-90a5-d65b5718abd1-scripts\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.903183 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.903213 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5s99\" (UniqueName: \"kubernetes.io/projected/094c63b8-a153-4ae4-90a5-d65b5718abd1-kube-api-access-t5s99\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.903231 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/094c63b8-a153-4ae4-90a5-d65b5718abd1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.903258 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.903290 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " 
pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.903814 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/094c63b8-a153-4ae4-90a5-d65b5718abd1-config\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.904849 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/094c63b8-a153-4ae4-90a5-d65b5718abd1-scripts\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.904862 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/094c63b8-a153-4ae4-90a5-d65b5718abd1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.909640 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.909980 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.911068 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/094c63b8-a153-4ae4-90a5-d65b5718abd1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:50:59 crc kubenswrapper[4817]: I0314 05:50:59.926113 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5s99\" (UniqueName: \"kubernetes.io/projected/094c63b8-a153-4ae4-90a5-d65b5718abd1-kube-api-access-t5s99\") pod \"ovn-northd-0\" (UID: \"094c63b8-a153-4ae4-90a5-d65b5718abd1\") " pod="openstack/ovn-northd-0" Mar 14 05:51:00 crc kubenswrapper[4817]: I0314 05:51:00.044055 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 14 05:51:00 crc kubenswrapper[4817]: I0314 05:51:00.485761 4817 generic.go:334] "Generic (PLEG): container finished" podID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerID="b56e14abf3df5b46ab2159c519e6c44be5611e33ad35c78f2f8b2e35eafd3c69" exitCode=0 Mar 14 05:51:00 crc kubenswrapper[4817]: I0314 05:51:00.485856 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" event={"ID":"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c","Type":"ContainerDied","Data":"b56e14abf3df5b46ab2159c519e6c44be5611e33ad35c78f2f8b2e35eafd3c69"} Mar 14 05:51:00 crc kubenswrapper[4817]: I0314 05:51:00.486998 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 14 05:51:00 crc kubenswrapper[4817]: W0314 05:51:00.494993 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod094c63b8_a153_4ae4_90a5_d65b5718abd1.slice/crio-87bc73eda475c60f72aa386af408b131eb93c35f71a2090bae45901b4b2fead9 WatchSource:0}: Error finding container 87bc73eda475c60f72aa386af408b131eb93c35f71a2090bae45901b4b2fead9: Status 404 returned error can't find the container with id 87bc73eda475c60f72aa386af408b131eb93c35f71a2090bae45901b4b2fead9 Mar 14 05:51:01 crc 
kubenswrapper[4817]: I0314 05:51:01.494813 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"094c63b8-a153-4ae4-90a5-d65b5718abd1","Type":"ContainerStarted","Data":"87bc73eda475c60f72aa386af408b131eb93c35f71a2090bae45901b4b2fead9"} Mar 14 05:51:02 crc kubenswrapper[4817]: I0314 05:51:02.825312 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 14 05:51:03 crc kubenswrapper[4817]: I0314 05:51:03.042199 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 14 05:51:03 crc kubenswrapper[4817]: I0314 05:51:03.145363 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="29e96de0-75d9-4da0-a41e-3b93a7274083" containerName="galera" probeResult="failure" output=< Mar 14 05:51:03 crc kubenswrapper[4817]: wsrep_local_state_comment (Joined) differs from Synced Mar 14 05:51:03 crc kubenswrapper[4817]: > Mar 14 05:51:03 crc kubenswrapper[4817]: I0314 05:51:03.718646 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" podUID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.410489 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.585344 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fgpxh"] Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.586927 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.598495 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fgpxh"] Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.610584 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4b54-account-create-update-xnzhj"] Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.611687 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.613992 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.626373 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4b54-account-create-update-xnzhj"] Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.687447 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8vs\" (UniqueName: \"kubernetes.io/projected/6db5d912-5a84-436a-a35c-82d4af03f6e5-kube-api-access-gl8vs\") pod \"glance-db-create-fgpxh\" (UID: \"6db5d912-5a84-436a-a35c-82d4af03f6e5\") " pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.687498 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv52s\" (UniqueName: \"kubernetes.io/projected/2900aee5-7edc-459c-a5ef-18b08d486d32-kube-api-access-mv52s\") pod \"glance-4b54-account-create-update-xnzhj\" (UID: \"2900aee5-7edc-459c-a5ef-18b08d486d32\") " pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.687521 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6db5d912-5a84-436a-a35c-82d4af03f6e5-operator-scripts\") pod \"glance-db-create-fgpxh\" (UID: \"6db5d912-5a84-436a-a35c-82d4af03f6e5\") " pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.687599 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2900aee5-7edc-459c-a5ef-18b08d486d32-operator-scripts\") pod \"glance-4b54-account-create-update-xnzhj\" (UID: \"2900aee5-7edc-459c-a5ef-18b08d486d32\") " pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.789035 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2900aee5-7edc-459c-a5ef-18b08d486d32-operator-scripts\") pod \"glance-4b54-account-create-update-xnzhj\" (UID: \"2900aee5-7edc-459c-a5ef-18b08d486d32\") " pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.789177 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl8vs\" (UniqueName: \"kubernetes.io/projected/6db5d912-5a84-436a-a35c-82d4af03f6e5-kube-api-access-gl8vs\") pod \"glance-db-create-fgpxh\" (UID: \"6db5d912-5a84-436a-a35c-82d4af03f6e5\") " pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.789210 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv52s\" (UniqueName: \"kubernetes.io/projected/2900aee5-7edc-459c-a5ef-18b08d486d32-kube-api-access-mv52s\") pod \"glance-4b54-account-create-update-xnzhj\" (UID: \"2900aee5-7edc-459c-a5ef-18b08d486d32\") " pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.789234 4817 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6db5d912-5a84-436a-a35c-82d4af03f6e5-operator-scripts\") pod \"glance-db-create-fgpxh\" (UID: \"6db5d912-5a84-436a-a35c-82d4af03f6e5\") " pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.790522 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2900aee5-7edc-459c-a5ef-18b08d486d32-operator-scripts\") pod \"glance-4b54-account-create-update-xnzhj\" (UID: \"2900aee5-7edc-459c-a5ef-18b08d486d32\") " pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.790974 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6db5d912-5a84-436a-a35c-82d4af03f6e5-operator-scripts\") pod \"glance-db-create-fgpxh\" (UID: \"6db5d912-5a84-436a-a35c-82d4af03f6e5\") " pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.807350 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl8vs\" (UniqueName: \"kubernetes.io/projected/6db5d912-5a84-436a-a35c-82d4af03f6e5-kube-api-access-gl8vs\") pod \"glance-db-create-fgpxh\" (UID: \"6db5d912-5a84-436a-a35c-82d4af03f6e5\") " pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.807651 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv52s\" (UniqueName: \"kubernetes.io/projected/2900aee5-7edc-459c-a5ef-18b08d486d32-kube-api-access-mv52s\") pod \"glance-4b54-account-create-update-xnzhj\" (UID: \"2900aee5-7edc-459c-a5ef-18b08d486d32\") " pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.909005 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:04 crc kubenswrapper[4817]: I0314 05:51:04.928021 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.319842 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-gbmkl"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.322741 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.332528 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gbmkl"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.364759 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4b54-account-create-update-xnzhj"] Mar 14 05:51:05 crc kubenswrapper[4817]: W0314 05:51:05.369488 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2900aee5_7edc_459c_a5ef_18b08d486d32.slice/crio-23bdae16b64d64d6bb5181079e709a3b5d49675ac505cb330b0fbd1483a6fb03 WatchSource:0}: Error finding container 23bdae16b64d64d6bb5181079e709a3b5d49675ac505cb330b0fbd1483a6fb03: Status 404 returned error can't find the container with id 23bdae16b64d64d6bb5181079e709a3b5d49675ac505cb330b0fbd1483a6fb03 Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.406983 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qrs\" (UniqueName: \"kubernetes.io/projected/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-kube-api-access-64qrs\") pod \"keystone-db-create-gbmkl\" (UID: \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\") " pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.407090 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-operator-scripts\") pod \"keystone-db-create-gbmkl\" (UID: \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\") " pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.431883 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fgpxh"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.442078 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-18c9-account-create-update-kdzmx"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.443934 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.445682 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 14 05:51:05 crc kubenswrapper[4817]: W0314 05:51:05.447417 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6db5d912_5a84_436a_a35c_82d4af03f6e5.slice/crio-89ce2d891378fc75ee6a4beae5d79b32cadd409ad991a57a2ecfefe7052e7a48 WatchSource:0}: Error finding container 89ce2d891378fc75ee6a4beae5d79b32cadd409ad991a57a2ecfefe7052e7a48: Status 404 returned error can't find the container with id 89ce2d891378fc75ee6a4beae5d79b32cadd409ad991a57a2ecfefe7052e7a48 Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.460458 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-18c9-account-create-update-kdzmx"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.509093 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-operator-scripts\") pod 
\"keystone-db-create-gbmkl\" (UID: \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\") " pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.509211 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdknz\" (UniqueName: \"kubernetes.io/projected/55f26155-aa88-49fa-a106-542d5b2e0cbb-kube-api-access-tdknz\") pod \"keystone-18c9-account-create-update-kdzmx\" (UID: \"55f26155-aa88-49fa-a106-542d5b2e0cbb\") " pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.509255 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55f26155-aa88-49fa-a106-542d5b2e0cbb-operator-scripts\") pod \"keystone-18c9-account-create-update-kdzmx\" (UID: \"55f26155-aa88-49fa-a106-542d5b2e0cbb\") " pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.509313 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qrs\" (UniqueName: \"kubernetes.io/projected/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-kube-api-access-64qrs\") pod \"keystone-db-create-gbmkl\" (UID: \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\") " pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.510090 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-operator-scripts\") pod \"keystone-db-create-gbmkl\" (UID: \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\") " pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.526352 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fgpxh" 
event={"ID":"6db5d912-5a84-436a-a35c-82d4af03f6e5","Type":"ContainerStarted","Data":"89ce2d891378fc75ee6a4beae5d79b32cadd409ad991a57a2ecfefe7052e7a48"} Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.527512 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b54-account-create-update-xnzhj" event={"ID":"2900aee5-7edc-459c-a5ef-18b08d486d32","Type":"ContainerStarted","Data":"23bdae16b64d64d6bb5181079e709a3b5d49675ac505cb330b0fbd1483a6fb03"} Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.529544 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qrs\" (UniqueName: \"kubernetes.io/projected/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-kube-api-access-64qrs\") pod \"keystone-db-create-gbmkl\" (UID: \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\") " pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.617991 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdknz\" (UniqueName: \"kubernetes.io/projected/55f26155-aa88-49fa-a106-542d5b2e0cbb-kube-api-access-tdknz\") pod \"keystone-18c9-account-create-update-kdzmx\" (UID: \"55f26155-aa88-49fa-a106-542d5b2e0cbb\") " pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.618087 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55f26155-aa88-49fa-a106-542d5b2e0cbb-operator-scripts\") pod \"keystone-18c9-account-create-update-kdzmx\" (UID: \"55f26155-aa88-49fa-a106-542d5b2e0cbb\") " pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.618862 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55f26155-aa88-49fa-a106-542d5b2e0cbb-operator-scripts\") pod 
\"keystone-18c9-account-create-update-kdzmx\" (UID: \"55f26155-aa88-49fa-a106-542d5b2e0cbb\") " pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.624741 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-99d9s"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.625630 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-99d9s" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.638516 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdknz\" (UniqueName: \"kubernetes.io/projected/55f26155-aa88-49fa-a106-542d5b2e0cbb-kube-api-access-tdknz\") pod \"keystone-18c9-account-create-update-kdzmx\" (UID: \"55f26155-aa88-49fa-a106-542d5b2e0cbb\") " pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.638580 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-99d9s"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.645173 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.719994 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4hj\" (UniqueName: \"kubernetes.io/projected/d078d0d8-b1c0-45c0-b578-772b6bb6350c-kube-api-access-9j4hj\") pod \"placement-db-create-99d9s\" (UID: \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\") " pod="openstack/placement-db-create-99d9s" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.720121 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d078d0d8-b1c0-45c0-b578-772b6bb6350c-operator-scripts\") pod \"placement-db-create-99d9s\" (UID: \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\") " pod="openstack/placement-db-create-99d9s" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.750571 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-040f-account-create-update-4vdv7"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.751769 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.753964 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.761791 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.762027 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-040f-account-create-update-4vdv7"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.821552 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjg9k\" (UniqueName: \"kubernetes.io/projected/0be6610c-71c9-4f99-8b48-ff76fd651942-kube-api-access-vjg9k\") pod \"placement-040f-account-create-update-4vdv7\" (UID: \"0be6610c-71c9-4f99-8b48-ff76fd651942\") " pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.822467 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d078d0d8-b1c0-45c0-b578-772b6bb6350c-operator-scripts\") pod \"placement-db-create-99d9s\" (UID: \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\") " pod="openstack/placement-db-create-99d9s" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.822690 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4hj\" (UniqueName: \"kubernetes.io/projected/d078d0d8-b1c0-45c0-b578-772b6bb6350c-kube-api-access-9j4hj\") pod \"placement-db-create-99d9s\" (UID: \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\") " pod="openstack/placement-db-create-99d9s" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.822726 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be6610c-71c9-4f99-8b48-ff76fd651942-operator-scripts\") pod \"placement-040f-account-create-update-4vdv7\" (UID: \"0be6610c-71c9-4f99-8b48-ff76fd651942\") " pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.823973 
4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d078d0d8-b1c0-45c0-b578-772b6bb6350c-operator-scripts\") pod \"placement-db-create-99d9s\" (UID: \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\") " pod="openstack/placement-db-create-99d9s" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.845378 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4hj\" (UniqueName: \"kubernetes.io/projected/d078d0d8-b1c0-45c0-b578-772b6bb6350c-kube-api-access-9j4hj\") pod \"placement-db-create-99d9s\" (UID: \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\") " pod="openstack/placement-db-create-99d9s" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.901167 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gbmkl"] Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.926021 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjg9k\" (UniqueName: \"kubernetes.io/projected/0be6610c-71c9-4f99-8b48-ff76fd651942-kube-api-access-vjg9k\") pod \"placement-040f-account-create-update-4vdv7\" (UID: \"0be6610c-71c9-4f99-8b48-ff76fd651942\") " pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.926146 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be6610c-71c9-4f99-8b48-ff76fd651942-operator-scripts\") pod \"placement-040f-account-create-update-4vdv7\" (UID: \"0be6610c-71c9-4f99-8b48-ff76fd651942\") " pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.927157 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be6610c-71c9-4f99-8b48-ff76fd651942-operator-scripts\") pod 
\"placement-040f-account-create-update-4vdv7\" (UID: \"0be6610c-71c9-4f99-8b48-ff76fd651942\") " pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.944495 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-99d9s" Mar 14 05:51:05 crc kubenswrapper[4817]: I0314 05:51:05.970340 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjg9k\" (UniqueName: \"kubernetes.io/projected/0be6610c-71c9-4f99-8b48-ff76fd651942-kube-api-access-vjg9k\") pod \"placement-040f-account-create-update-4vdv7\" (UID: \"0be6610c-71c9-4f99-8b48-ff76fd651942\") " pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:06 crc kubenswrapper[4817]: I0314 05:51:06.078679 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:06 crc kubenswrapper[4817]: I0314 05:51:06.236862 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-18c9-account-create-update-kdzmx"] Mar 14 05:51:06 crc kubenswrapper[4817]: W0314 05:51:06.239994 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55f26155_aa88_49fa_a106_542d5b2e0cbb.slice/crio-adc18f502d3d96345c3ed26ffd2774ad2338c53a7ec59339bfc34734ed89c314 WatchSource:0}: Error finding container adc18f502d3d96345c3ed26ffd2774ad2338c53a7ec59339bfc34734ed89c314: Status 404 returned error can't find the container with id adc18f502d3d96345c3ed26ffd2774ad2338c53a7ec59339bfc34734ed89c314 Mar 14 05:51:06 crc kubenswrapper[4817]: I0314 05:51:06.426577 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-99d9s"] Mar 14 05:51:06 crc kubenswrapper[4817]: W0314 05:51:06.427672 4817 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd078d0d8_b1c0_45c0_b578_772b6bb6350c.slice/crio-5592ad7ce590e246aedd73bb02f3c03bc7c2ef059efd7e3e78a9c8f6412e65c6 WatchSource:0}: Error finding container 5592ad7ce590e246aedd73bb02f3c03bc7c2ef059efd7e3e78a9c8f6412e65c6: Status 404 returned error can't find the container with id 5592ad7ce590e246aedd73bb02f3c03bc7c2ef059efd7e3e78a9c8f6412e65c6 Mar 14 05:51:06 crc kubenswrapper[4817]: I0314 05:51:06.539714 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-99d9s" event={"ID":"d078d0d8-b1c0-45c0-b578-772b6bb6350c","Type":"ContainerStarted","Data":"5592ad7ce590e246aedd73bb02f3c03bc7c2ef059efd7e3e78a9c8f6412e65c6"} Mar 14 05:51:06 crc kubenswrapper[4817]: I0314 05:51:06.541092 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gbmkl" event={"ID":"1b1b7f4d-c285-477d-87aa-2d2b3be6053d","Type":"ContainerStarted","Data":"a298ac22b49a2eea9cc32e91046ad28c9c16534893ce22f6463684da51ca6e65"} Mar 14 05:51:06 crc kubenswrapper[4817]: I0314 05:51:06.541975 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-18c9-account-create-update-kdzmx" event={"ID":"55f26155-aa88-49fa-a106-542d5b2e0cbb","Type":"ContainerStarted","Data":"adc18f502d3d96345c3ed26ffd2774ad2338c53a7ec59339bfc34734ed89c314"} Mar 14 05:51:06 crc kubenswrapper[4817]: I0314 05:51:06.542324 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-040f-account-create-update-4vdv7"] Mar 14 05:51:06 crc kubenswrapper[4817]: W0314 05:51:06.543843 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be6610c_71c9_4f99_8b48_ff76fd651942.slice/crio-38d7de2822fbbf5984bf4d91facea37d48cd5b8d55c148f7587f314bdd0d48fb WatchSource:0}: Error finding container 38d7de2822fbbf5984bf4d91facea37d48cd5b8d55c148f7587f314bdd0d48fb: Status 404 returned error can't find the 
container with id 38d7de2822fbbf5984bf4d91facea37d48cd5b8d55c148f7587f314bdd0d48fb Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.565567 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-040f-account-create-update-4vdv7" event={"ID":"0be6610c-71c9-4f99-8b48-ff76fd651942","Type":"ContainerStarted","Data":"bd298c7ff71b39f1828c09c2c876b02a5f5c00a037eaaf83d563e40280ca952a"} Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.565968 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-040f-account-create-update-4vdv7" event={"ID":"0be6610c-71c9-4f99-8b48-ff76fd651942","Type":"ContainerStarted","Data":"38d7de2822fbbf5984bf4d91facea37d48cd5b8d55c148f7587f314bdd0d48fb"} Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.576391 4817 generic.go:334] "Generic (PLEG): container finished" podID="1b1b7f4d-c285-477d-87aa-2d2b3be6053d" containerID="5389ddca85e27077ddc0d56774f08f83d3b92a3e993de7976097a33d774fa1de" exitCode=0 Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.576461 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gbmkl" event={"ID":"1b1b7f4d-c285-477d-87aa-2d2b3be6053d","Type":"ContainerDied","Data":"5389ddca85e27077ddc0d56774f08f83d3b92a3e993de7976097a33d774fa1de"} Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.577968 4817 generic.go:334] "Generic (PLEG): container finished" podID="55f26155-aa88-49fa-a106-542d5b2e0cbb" containerID="d788d696236a365aa9c20c9edb2084f19339d4599e97d1748ebba798b5a8b3e6" exitCode=0 Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.578029 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-18c9-account-create-update-kdzmx" event={"ID":"55f26155-aa88-49fa-a106-542d5b2e0cbb","Type":"ContainerDied","Data":"d788d696236a365aa9c20c9edb2084f19339d4599e97d1748ebba798b5a8b3e6"} Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.581693 4817 generic.go:334] "Generic (PLEG): container 
finished" podID="2900aee5-7edc-459c-a5ef-18b08d486d32" containerID="df93c48553a8c805e604e276073023d87222e8762e527129e8b005ee18d65e04" exitCode=0 Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.581745 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b54-account-create-update-xnzhj" event={"ID":"2900aee5-7edc-459c-a5ef-18b08d486d32","Type":"ContainerDied","Data":"df93c48553a8c805e604e276073023d87222e8762e527129e8b005ee18d65e04"} Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.582479 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-040f-account-create-update-4vdv7" podStartSLOduration=2.582461574 podStartE2EDuration="2.582461574s" podCreationTimestamp="2026-03-14 05:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:07.580075796 +0000 UTC m=+1121.618336552" watchObservedRunningTime="2026-03-14 05:51:07.582461574 +0000 UTC m=+1121.620722320" Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.587710 4817 generic.go:334] "Generic (PLEG): container finished" podID="6db5d912-5a84-436a-a35c-82d4af03f6e5" containerID="64bb8de80cfda8b9b7c8f28e9f5e5d982bad7ca9a3e525abd87dbd41ca70a902" exitCode=0 Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.587746 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fgpxh" event={"ID":"6db5d912-5a84-436a-a35c-82d4af03f6e5","Type":"ContainerDied","Data":"64bb8de80cfda8b9b7c8f28e9f5e5d982bad7ca9a3e525abd87dbd41ca70a902"} Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.589794 4817 generic.go:334] "Generic (PLEG): container finished" podID="d078d0d8-b1c0-45c0-b578-772b6bb6350c" containerID="30c1861aa9d526d0408f342cdf0148e81a751197ef24a6eaa174ce5d5a6d2f39" exitCode=0 Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.589843 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-99d9s" event={"ID":"d078d0d8-b1c0-45c0-b578-772b6bb6350c","Type":"ContainerDied","Data":"30c1861aa9d526d0408f342cdf0148e81a751197ef24a6eaa174ce5d5a6d2f39"} Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.835190 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.956705 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq97c\" (UniqueName: \"kubernetes.io/projected/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-kube-api-access-nq97c\") pod \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.956813 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-config\") pod \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.956912 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-ovsdbserver-sb\") pod \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.956996 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-dns-svc\") pod \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\" (UID: \"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c\") " Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.971222 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-kube-api-access-nq97c" (OuterVolumeSpecName: "kube-api-access-nq97c") pod "2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" (UID: "2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c"). InnerVolumeSpecName "kube-api-access-nq97c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:07 crc kubenswrapper[4817]: I0314 05:51:07.997492 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" (UID: "2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.001291 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" (UID: "2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.002000 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-config" (OuterVolumeSpecName: "config") pod "2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" (UID: "2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.059390 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq97c\" (UniqueName: \"kubernetes.io/projected/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-kube-api-access-nq97c\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.059429 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.059439 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.059448 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.565883 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.566315 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.566363 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.566939 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"410879e5dd288fd6afed9ea2c23e57c34cba5d0fba30b068075ef7767158e5fe"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.567012 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://410879e5dd288fd6afed9ea2c23e57c34cba5d0fba30b068075ef7767158e5fe" gracePeriod=600 Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.613779 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"094c63b8-a153-4ae4-90a5-d65b5718abd1","Type":"ContainerStarted","Data":"87e704f3cd18b234104c6cc1abcdd62cb34078d98ab14ae8761e4b12d65c8dab"} Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.613872 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"094c63b8-a153-4ae4-90a5-d65b5718abd1","Type":"ContainerStarted","Data":"f1261f517b65ca9508cf8ec046b116726794637ed62a16205ba85081c13a6a63"} Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.613924 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.617059 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" event={"ID":"2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c","Type":"ContainerDied","Data":"347758eef57cdf90a397b68e2e3988d6724dcce2dd9601bec316c2c2f49e1b36"} Mar 14 05:51:08 crc 
kubenswrapper[4817]: I0314 05:51:08.617099 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-2bk5q" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.617133 4817 scope.go:117] "RemoveContainer" containerID="b56e14abf3df5b46ab2159c519e6c44be5611e33ad35c78f2f8b2e35eafd3c69" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.619444 4817 generic.go:334] "Generic (PLEG): container finished" podID="0be6610c-71c9-4f99-8b48-ff76fd651942" containerID="bd298c7ff71b39f1828c09c2c876b02a5f5c00a037eaaf83d563e40280ca952a" exitCode=0 Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.619542 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-040f-account-create-update-4vdv7" event={"ID":"0be6610c-71c9-4f99-8b48-ff76fd651942","Type":"ContainerDied","Data":"bd298c7ff71b39f1828c09c2c876b02a5f5c00a037eaaf83d563e40280ca952a"} Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.641181 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.959798412 podStartE2EDuration="9.641163955s" podCreationTimestamp="2026-03-14 05:50:59 +0000 UTC" firstStartedPulling="2026-03-14 05:51:00.497751002 +0000 UTC m=+1114.536011748" lastFinishedPulling="2026-03-14 05:51:08.179116555 +0000 UTC m=+1122.217377291" observedRunningTime="2026-03-14 05:51:08.629287197 +0000 UTC m=+1122.667547943" watchObservedRunningTime="2026-03-14 05:51:08.641163955 +0000 UTC m=+1122.679424701" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.647252 4817 scope.go:117] "RemoveContainer" containerID="05a3af1820e2d5d28bc5c03c5715ddc3aa51ce64cd614f359840fa6ee26c74b9" Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.672662 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-2bk5q"] Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.680235 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-6bc7876d45-2bk5q"] Mar 14 05:51:08 crc kubenswrapper[4817]: I0314 05:51:08.742699 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" path="/var/lib/kubelet/pods/2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c/volumes" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.046273 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.182970 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6db5d912-5a84-436a-a35c-82d4af03f6e5-operator-scripts\") pod \"6db5d912-5a84-436a-a35c-82d4af03f6e5\" (UID: \"6db5d912-5a84-436a-a35c-82d4af03f6e5\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.183380 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl8vs\" (UniqueName: \"kubernetes.io/projected/6db5d912-5a84-436a-a35c-82d4af03f6e5-kube-api-access-gl8vs\") pod \"6db5d912-5a84-436a-a35c-82d4af03f6e5\" (UID: \"6db5d912-5a84-436a-a35c-82d4af03f6e5\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.184394 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6db5d912-5a84-436a-a35c-82d4af03f6e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6db5d912-5a84-436a-a35c-82d4af03f6e5" (UID: "6db5d912-5a84-436a-a35c-82d4af03f6e5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.202030 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db5d912-5a84-436a-a35c-82d4af03f6e5-kube-api-access-gl8vs" (OuterVolumeSpecName: "kube-api-access-gl8vs") pod "6db5d912-5a84-436a-a35c-82d4af03f6e5" (UID: "6db5d912-5a84-436a-a35c-82d4af03f6e5"). InnerVolumeSpecName "kube-api-access-gl8vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.285587 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6db5d912-5a84-436a-a35c-82d4af03f6e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.285651 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl8vs\" (UniqueName: \"kubernetes.io/projected/6db5d912-5a84-436a-a35c-82d4af03f6e5-kube-api-access-gl8vs\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.319046 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.325595 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.332091 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.340196 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-99d9s" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.387324 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64qrs\" (UniqueName: \"kubernetes.io/projected/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-kube-api-access-64qrs\") pod \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\" (UID: \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.387422 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2900aee5-7edc-459c-a5ef-18b08d486d32-operator-scripts\") pod \"2900aee5-7edc-459c-a5ef-18b08d486d32\" (UID: \"2900aee5-7edc-459c-a5ef-18b08d486d32\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.387490 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv52s\" (UniqueName: \"kubernetes.io/projected/2900aee5-7edc-459c-a5ef-18b08d486d32-kube-api-access-mv52s\") pod \"2900aee5-7edc-459c-a5ef-18b08d486d32\" (UID: \"2900aee5-7edc-459c-a5ef-18b08d486d32\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.387544 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-operator-scripts\") pod \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\" (UID: \"1b1b7f4d-c285-477d-87aa-2d2b3be6053d\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.388410 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b1b7f4d-c285-477d-87aa-2d2b3be6053d" (UID: "1b1b7f4d-c285-477d-87aa-2d2b3be6053d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.389174 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2900aee5-7edc-459c-a5ef-18b08d486d32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2900aee5-7edc-459c-a5ef-18b08d486d32" (UID: "2900aee5-7edc-459c-a5ef-18b08d486d32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.394267 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2900aee5-7edc-459c-a5ef-18b08d486d32-kube-api-access-mv52s" (OuterVolumeSpecName: "kube-api-access-mv52s") pod "2900aee5-7edc-459c-a5ef-18b08d486d32" (UID: "2900aee5-7edc-459c-a5ef-18b08d486d32"). InnerVolumeSpecName "kube-api-access-mv52s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.395184 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-kube-api-access-64qrs" (OuterVolumeSpecName: "kube-api-access-64qrs") pod "1b1b7f4d-c285-477d-87aa-2d2b3be6053d" (UID: "1b1b7f4d-c285-477d-87aa-2d2b3be6053d"). InnerVolumeSpecName "kube-api-access-64qrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.488663 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55f26155-aa88-49fa-a106-542d5b2e0cbb-operator-scripts\") pod \"55f26155-aa88-49fa-a106-542d5b2e0cbb\" (UID: \"55f26155-aa88-49fa-a106-542d5b2e0cbb\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.488771 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d078d0d8-b1c0-45c0-b578-772b6bb6350c-operator-scripts\") pod \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\" (UID: \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.488850 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4hj\" (UniqueName: \"kubernetes.io/projected/d078d0d8-b1c0-45c0-b578-772b6bb6350c-kube-api-access-9j4hj\") pod \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\" (UID: \"d078d0d8-b1c0-45c0-b578-772b6bb6350c\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.488935 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdknz\" (UniqueName: \"kubernetes.io/projected/55f26155-aa88-49fa-a106-542d5b2e0cbb-kube-api-access-tdknz\") pod \"55f26155-aa88-49fa-a106-542d5b2e0cbb\" (UID: \"55f26155-aa88-49fa-a106-542d5b2e0cbb\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.489263 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64qrs\" (UniqueName: \"kubernetes.io/projected/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-kube-api-access-64qrs\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.489278 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2900aee5-7edc-459c-a5ef-18b08d486d32-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.489287 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv52s\" (UniqueName: \"kubernetes.io/projected/2900aee5-7edc-459c-a5ef-18b08d486d32-kube-api-access-mv52s\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.489296 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b1b7f4d-c285-477d-87aa-2d2b3be6053d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.489379 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d078d0d8-b1c0-45c0-b578-772b6bb6350c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d078d0d8-b1c0-45c0-b578-772b6bb6350c" (UID: "d078d0d8-b1c0-45c0-b578-772b6bb6350c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.489441 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f26155-aa88-49fa-a106-542d5b2e0cbb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55f26155-aa88-49fa-a106-542d5b2e0cbb" (UID: "55f26155-aa88-49fa-a106-542d5b2e0cbb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.491924 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d078d0d8-b1c0-45c0-b578-772b6bb6350c-kube-api-access-9j4hj" (OuterVolumeSpecName: "kube-api-access-9j4hj") pod "d078d0d8-b1c0-45c0-b578-772b6bb6350c" (UID: "d078d0d8-b1c0-45c0-b578-772b6bb6350c"). InnerVolumeSpecName "kube-api-access-9j4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.492725 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f26155-aa88-49fa-a106-542d5b2e0cbb-kube-api-access-tdknz" (OuterVolumeSpecName: "kube-api-access-tdknz") pod "55f26155-aa88-49fa-a106-542d5b2e0cbb" (UID: "55f26155-aa88-49fa-a106-542d5b2e0cbb"). InnerVolumeSpecName "kube-api-access-tdknz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.590328 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55f26155-aa88-49fa-a106-542d5b2e0cbb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.590371 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d078d0d8-b1c0-45c0-b578-772b6bb6350c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.590385 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j4hj\" (UniqueName: \"kubernetes.io/projected/d078d0d8-b1c0-45c0-b578-772b6bb6350c-kube-api-access-9j4hj\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.590401 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdknz\" (UniqueName: \"kubernetes.io/projected/55f26155-aa88-49fa-a106-542d5b2e0cbb-kube-api-access-tdknz\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.629723 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4b54-account-create-update-xnzhj" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.629712 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b54-account-create-update-xnzhj" event={"ID":"2900aee5-7edc-459c-a5ef-18b08d486d32","Type":"ContainerDied","Data":"23bdae16b64d64d6bb5181079e709a3b5d49675ac505cb330b0fbd1483a6fb03"} Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.629852 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23bdae16b64d64d6bb5181079e709a3b5d49675ac505cb330b0fbd1483a6fb03" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.634340 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fgpxh" event={"ID":"6db5d912-5a84-436a-a35c-82d4af03f6e5","Type":"ContainerDied","Data":"89ce2d891378fc75ee6a4beae5d79b32cadd409ad991a57a2ecfefe7052e7a48"} Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.634378 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ce2d891378fc75ee6a4beae5d79b32cadd409ad991a57a2ecfefe7052e7a48" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.634425 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fgpxh" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.636760 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-99d9s" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.636680 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-99d9s" event={"ID":"d078d0d8-b1c0-45c0-b578-772b6bb6350c","Type":"ContainerDied","Data":"5592ad7ce590e246aedd73bb02f3c03bc7c2ef059efd7e3e78a9c8f6412e65c6"} Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.637053 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5592ad7ce590e246aedd73bb02f3c03bc7c2ef059efd7e3e78a9c8f6412e65c6" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.639526 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="410879e5dd288fd6afed9ea2c23e57c34cba5d0fba30b068075ef7767158e5fe" exitCode=0 Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.639593 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"410879e5dd288fd6afed9ea2c23e57c34cba5d0fba30b068075ef7767158e5fe"} Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.639654 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"4cedc780ac8b4d762839f86c77e2e11ff9cb9f77222802713452641e56fdcbca"} Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.639675 4817 scope.go:117] "RemoveContainer" containerID="114f8ca4faca8cf630930433049d8d1045dc89bc450b7ac565ae6c778fa29990" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.642268 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gbmkl" 
event={"ID":"1b1b7f4d-c285-477d-87aa-2d2b3be6053d","Type":"ContainerDied","Data":"a298ac22b49a2eea9cc32e91046ad28c9c16534893ce22f6463684da51ca6e65"} Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.642295 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a298ac22b49a2eea9cc32e91046ad28c9c16534893ce22f6463684da51ca6e65" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.643170 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gbmkl" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.645816 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-18c9-account-create-update-kdzmx" event={"ID":"55f26155-aa88-49fa-a106-542d5b2e0cbb","Type":"ContainerDied","Data":"adc18f502d3d96345c3ed26ffd2774ad2338c53a7ec59339bfc34734ed89c314"} Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.645862 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adc18f502d3d96345c3ed26ffd2774ad2338c53a7ec59339bfc34734ed89c314" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.645909 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-18c9-account-create-update-kdzmx" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.884475 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.995672 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be6610c-71c9-4f99-8b48-ff76fd651942-operator-scripts\") pod \"0be6610c-71c9-4f99-8b48-ff76fd651942\" (UID: \"0be6610c-71c9-4f99-8b48-ff76fd651942\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.995844 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjg9k\" (UniqueName: \"kubernetes.io/projected/0be6610c-71c9-4f99-8b48-ff76fd651942-kube-api-access-vjg9k\") pod \"0be6610c-71c9-4f99-8b48-ff76fd651942\" (UID: \"0be6610c-71c9-4f99-8b48-ff76fd651942\") " Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.996221 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be6610c-71c9-4f99-8b48-ff76fd651942-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0be6610c-71c9-4f99-8b48-ff76fd651942" (UID: "0be6610c-71c9-4f99-8b48-ff76fd651942"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:09 crc kubenswrapper[4817]: I0314 05:51:09.996525 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be6610c-71c9-4f99-8b48-ff76fd651942-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:10 crc kubenswrapper[4817]: I0314 05:51:09.999950 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be6610c-71c9-4f99-8b48-ff76fd651942-kube-api-access-vjg9k" (OuterVolumeSpecName: "kube-api-access-vjg9k") pod "0be6610c-71c9-4f99-8b48-ff76fd651942" (UID: "0be6610c-71c9-4f99-8b48-ff76fd651942"). InnerVolumeSpecName "kube-api-access-vjg9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:10 crc kubenswrapper[4817]: I0314 05:51:10.098766 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjg9k\" (UniqueName: \"kubernetes.io/projected/0be6610c-71c9-4f99-8b48-ff76fd651942-kube-api-access-vjg9k\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:10 crc kubenswrapper[4817]: I0314 05:51:10.655152 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-040f-account-create-update-4vdv7" event={"ID":"0be6610c-71c9-4f99-8b48-ff76fd651942","Type":"ContainerDied","Data":"38d7de2822fbbf5984bf4d91facea37d48cd5b8d55c148f7587f314bdd0d48fb"} Mar 14 05:51:10 crc kubenswrapper[4817]: I0314 05:51:10.655527 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d7de2822fbbf5984bf4d91facea37d48cd5b8d55c148f7587f314bdd0d48fb" Mar 14 05:51:10 crc kubenswrapper[4817]: I0314 05:51:10.655200 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-040f-account-create-update-4vdv7" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.370279 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2jl27"] Mar 14 05:51:11 crc kubenswrapper[4817]: E0314 05:51:11.371088 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d078d0d8-b1c0-45c0-b578-772b6bb6350c" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371109 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d078d0d8-b1c0-45c0-b578-772b6bb6350c" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: E0314 05:51:11.371134 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerName="dnsmasq-dns" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371143 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerName="dnsmasq-dns" Mar 14 05:51:11 crc kubenswrapper[4817]: E0314 05:51:11.371179 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1b7f4d-c285-477d-87aa-2d2b3be6053d" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371189 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1b7f4d-c285-477d-87aa-2d2b3be6053d" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: E0314 05:51:11.371226 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be6610c-71c9-4f99-8b48-ff76fd651942" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371235 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be6610c-71c9-4f99-8b48-ff76fd651942" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: E0314 05:51:11.371252 4817 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerName="init" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371262 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerName="init" Mar 14 05:51:11 crc kubenswrapper[4817]: E0314 05:51:11.371286 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db5d912-5a84-436a-a35c-82d4af03f6e5" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371296 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db5d912-5a84-436a-a35c-82d4af03f6e5" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: E0314 05:51:11.371311 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2900aee5-7edc-459c-a5ef-18b08d486d32" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371320 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2900aee5-7edc-459c-a5ef-18b08d486d32" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: E0314 05:51:11.371345 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f26155-aa88-49fa-a106-542d5b2e0cbb" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371356 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f26155-aa88-49fa-a106-542d5b2e0cbb" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371587 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2925ed2d-4a13-4fc2-b62b-1bb73fd2f69c" containerName="dnsmasq-dns" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371604 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d078d0d8-b1c0-45c0-b578-772b6bb6350c" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 
05:51:11.371621 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be6610c-71c9-4f99-8b48-ff76fd651942" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371634 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2900aee5-7edc-459c-a5ef-18b08d486d32" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371653 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b1b7f4d-c285-477d-87aa-2d2b3be6053d" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371676 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db5d912-5a84-436a-a35c-82d4af03f6e5" containerName="mariadb-database-create" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.371697 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f26155-aa88-49fa-a106-542d5b2e0cbb" containerName="mariadb-account-create-update" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.372518 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.377336 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.388652 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2jl27"] Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.430108 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9l6m\" (UniqueName: \"kubernetes.io/projected/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-kube-api-access-r9l6m\") pod \"root-account-create-update-2jl27\" (UID: \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\") " pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.430188 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-operator-scripts\") pod \"root-account-create-update-2jl27\" (UID: \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\") " pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.531678 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-operator-scripts\") pod \"root-account-create-update-2jl27\" (UID: \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\") " pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.531842 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9l6m\" (UniqueName: \"kubernetes.io/projected/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-kube-api-access-r9l6m\") pod \"root-account-create-update-2jl27\" (UID: 
\"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\") " pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.532972 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-operator-scripts\") pod \"root-account-create-update-2jl27\" (UID: \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\") " pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.549777 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9l6m\" (UniqueName: \"kubernetes.io/projected/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-kube-api-access-r9l6m\") pod \"root-account-create-update-2jl27\" (UID: \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\") " pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:11 crc kubenswrapper[4817]: I0314 05:51:11.691147 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:12 crc kubenswrapper[4817]: I0314 05:51:12.108742 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2jl27"] Mar 14 05:51:12 crc kubenswrapper[4817]: I0314 05:51:12.683660 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2jl27" event={"ID":"cadf4e88-868f-4866-acb9-fcb9f4d4fd03","Type":"ContainerStarted","Data":"004fc33bcc3ca3b755b9d0a7ef51e4475285017a6406de6578362c3ab1fdce12"} Mar 14 05:51:12 crc kubenswrapper[4817]: I0314 05:51:12.683716 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2jl27" event={"ID":"cadf4e88-868f-4866-acb9-fcb9f4d4fd03","Type":"ContainerStarted","Data":"ac68d68742effc4c6db511086345ea0fe0608757055a7e92a4e33887eff9e3c5"} Mar 14 05:51:12 crc kubenswrapper[4817]: I0314 05:51:12.706265 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-2jl27" podStartSLOduration=1.706243095 podStartE2EDuration="1.706243095s" podCreationTimestamp="2026-03-14 05:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:12.703523087 +0000 UTC m=+1126.741783863" watchObservedRunningTime="2026-03-14 05:51:12.706243095 +0000 UTC m=+1126.744503841" Mar 14 05:51:13 crc kubenswrapper[4817]: I0314 05:51:13.693807 4817 generic.go:334] "Generic (PLEG): container finished" podID="cadf4e88-868f-4866-acb9-fcb9f4d4fd03" containerID="004fc33bcc3ca3b755b9d0a7ef51e4475285017a6406de6578362c3ab1fdce12" exitCode=0 Mar 14 05:51:13 crc kubenswrapper[4817]: I0314 05:51:13.694037 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2jl27" 
event={"ID":"cadf4e88-868f-4866-acb9-fcb9f4d4fd03","Type":"ContainerDied","Data":"004fc33bcc3ca3b755b9d0a7ef51e4475285017a6406de6578362c3ab1fdce12"} Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.767356 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zzk9h"] Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.768971 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.773129 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pxpbs" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.773297 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.779995 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zzk9h"] Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.889545 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-config-data\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.889660 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fltbh\" (UniqueName: \"kubernetes.io/projected/081334e9-833f-4a52-893e-29c7ac2241ac-kube-api-access-fltbh\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.889681 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-db-sync-config-data\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.889699 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-combined-ca-bundle\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.990689 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-config-data\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.990807 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fltbh\" (UniqueName: \"kubernetes.io/projected/081334e9-833f-4a52-893e-29c7ac2241ac-kube-api-access-fltbh\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.990835 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-db-sync-config-data\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.990864 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-combined-ca-bundle\") pod 
\"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.996386 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-db-sync-config-data\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.996510 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-combined-ca-bundle\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:14 crc kubenswrapper[4817]: I0314 05:51:14.996867 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-config-data\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.017302 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fltbh\" (UniqueName: \"kubernetes.io/projected/081334e9-833f-4a52-893e-29c7ac2241ac-kube-api-access-fltbh\") pod \"glance-db-sync-zzk9h\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.077056 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.092695 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.193485 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9l6m\" (UniqueName: \"kubernetes.io/projected/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-kube-api-access-r9l6m\") pod \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\" (UID: \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\") " Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.194053 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-operator-scripts\") pod \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\" (UID: \"cadf4e88-868f-4866-acb9-fcb9f4d4fd03\") " Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.194767 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cadf4e88-868f-4866-acb9-fcb9f4d4fd03" (UID: "cadf4e88-868f-4866-acb9-fcb9f4d4fd03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.196759 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-kube-api-access-r9l6m" (OuterVolumeSpecName: "kube-api-access-r9l6m") pod "cadf4e88-868f-4866-acb9-fcb9f4d4fd03" (UID: "cadf4e88-868f-4866-acb9-fcb9f4d4fd03"). InnerVolumeSpecName "kube-api-access-r9l6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.296545 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9l6m\" (UniqueName: \"kubernetes.io/projected/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-kube-api-access-r9l6m\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.296592 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cadf4e88-868f-4866-acb9-fcb9f4d4fd03-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.633881 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zzk9h"] Mar 14 05:51:15 crc kubenswrapper[4817]: W0314 05:51:15.639429 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod081334e9_833f_4a52_893e_29c7ac2241ac.slice/crio-e421a5ba8772e7bf75cf0a9780b452782bd5ba53f48936021d7fa135f878e2ad WatchSource:0}: Error finding container e421a5ba8772e7bf75cf0a9780b452782bd5ba53f48936021d7fa135f878e2ad: Status 404 returned error can't find the container with id e421a5ba8772e7bf75cf0a9780b452782bd5ba53f48936021d7fa135f878e2ad Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.716521 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zzk9h" event={"ID":"081334e9-833f-4a52-893e-29c7ac2241ac","Type":"ContainerStarted","Data":"e421a5ba8772e7bf75cf0a9780b452782bd5ba53f48936021d7fa135f878e2ad"} Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.717970 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2jl27" event={"ID":"cadf4e88-868f-4866-acb9-fcb9f4d4fd03","Type":"ContainerDied","Data":"ac68d68742effc4c6db511086345ea0fe0608757055a7e92a4e33887eff9e3c5"} Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.718008 4817 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac68d68742effc4c6db511086345ea0fe0608757055a7e92a4e33887eff9e3c5" Mar 14 05:51:15 crc kubenswrapper[4817]: I0314 05:51:15.718173 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2jl27" Mar 14 05:51:17 crc kubenswrapper[4817]: I0314 05:51:17.690271 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2jl27"] Mar 14 05:51:17 crc kubenswrapper[4817]: I0314 05:51:17.696237 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2jl27"] Mar 14 05:51:18 crc kubenswrapper[4817]: I0314 05:51:18.744269 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cadf4e88-868f-4866-acb9-fcb9f4d4fd03" path="/var/lib/kubelet/pods/cadf4e88-868f-4866-acb9-fcb9f4d4fd03/volumes" Mar 14 05:51:20 crc kubenswrapper[4817]: I0314 05:51:20.096949 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 14 05:51:20 crc kubenswrapper[4817]: I0314 05:51:20.177921 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nsn5q" podUID="9790d6d0-9013-42cf-bb3d-394f5fc292ba" containerName="ovn-controller" probeResult="failure" output=< Mar 14 05:51:20 crc kubenswrapper[4817]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 14 05:51:20 crc kubenswrapper[4817]: > Mar 14 05:51:21 crc kubenswrapper[4817]: I0314 05:51:21.760790 4817 generic.go:334] "Generic (PLEG): container finished" podID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" containerID="c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e" exitCode=0 Mar 14 05:51:21 crc kubenswrapper[4817]: I0314 05:51:21.760914 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"aa3ffb28-ad8e-4691-a5ff-ae17d083a019","Type":"ContainerDied","Data":"c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e"} Mar 14 05:51:21 crc kubenswrapper[4817]: I0314 05:51:21.763751 4817 generic.go:334] "Generic (PLEG): container finished" podID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerID="52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997" exitCode=0 Mar 14 05:51:21 crc kubenswrapper[4817]: I0314 05:51:21.763797 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f20e21c5-3d26-4494-a4d7-43323e059f31","Type":"ContainerDied","Data":"52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997"} Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.700234 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gp8wg"] Mar 14 05:51:22 crc kubenswrapper[4817]: E0314 05:51:22.700659 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadf4e88-868f-4866-acb9-fcb9f4d4fd03" containerName="mariadb-account-create-update" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.700683 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadf4e88-868f-4866-acb9-fcb9f4d4fd03" containerName="mariadb-account-create-update" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.700885 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="cadf4e88-868f-4866-acb9-fcb9f4d4fd03" containerName="mariadb-account-create-update" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.701704 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.706608 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.722644 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gp8wg"] Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.842946 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d16d505-b626-4d62-8688-688edc7182c2-operator-scripts\") pod \"root-account-create-update-gp8wg\" (UID: \"7d16d505-b626-4d62-8688-688edc7182c2\") " pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.843335 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgkw9\" (UniqueName: \"kubernetes.io/projected/7d16d505-b626-4d62-8688-688edc7182c2-kube-api-access-qgkw9\") pod \"root-account-create-update-gp8wg\" (UID: \"7d16d505-b626-4d62-8688-688edc7182c2\") " pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.945217 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d16d505-b626-4d62-8688-688edc7182c2-operator-scripts\") pod \"root-account-create-update-gp8wg\" (UID: \"7d16d505-b626-4d62-8688-688edc7182c2\") " pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.945285 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgkw9\" (UniqueName: \"kubernetes.io/projected/7d16d505-b626-4d62-8688-688edc7182c2-kube-api-access-qgkw9\") pod \"root-account-create-update-gp8wg\" (UID: 
\"7d16d505-b626-4d62-8688-688edc7182c2\") " pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.947494 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d16d505-b626-4d62-8688-688edc7182c2-operator-scripts\") pod \"root-account-create-update-gp8wg\" (UID: \"7d16d505-b626-4d62-8688-688edc7182c2\") " pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:22 crc kubenswrapper[4817]: I0314 05:51:22.964720 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgkw9\" (UniqueName: \"kubernetes.io/projected/7d16d505-b626-4d62-8688-688edc7182c2-kube-api-access-qgkw9\") pod \"root-account-create-update-gp8wg\" (UID: \"7d16d505-b626-4d62-8688-688edc7182c2\") " pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:23 crc kubenswrapper[4817]: I0314 05:51:23.019198 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.179925 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nsn5q" podUID="9790d6d0-9013-42cf-bb3d-394f5fc292ba" containerName="ovn-controller" probeResult="failure" output=< Mar 14 05:51:25 crc kubenswrapper[4817]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 14 05:51:25 crc kubenswrapper[4817]: > Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.201960 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.204296 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rj9cw" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.436726 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nsn5q-config-lmtvg"] Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.438704 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.441581 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.461403 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nsn5q-config-lmtvg"] Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.588683 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.588751 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2vc9\" (UniqueName: \"kubernetes.io/projected/47495381-0bd0-460b-a6e4-e94ff273e610-kube-api-access-w2vc9\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.588798 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run-ovn\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.588915 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-additional-scripts\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: 
\"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.588947 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-log-ovn\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.589005 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-scripts\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.690377 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.690443 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2vc9\" (UniqueName: \"kubernetes.io/projected/47495381-0bd0-460b-a6e4-e94ff273e610-kube-api-access-w2vc9\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.690482 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run-ovn\") pod 
\"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.690532 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-additional-scripts\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.690555 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-log-ovn\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.690610 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-scripts\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.690752 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.690752 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-log-ovn\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: 
\"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.691517 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-additional-scripts\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.692661 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run-ovn\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.693753 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-scripts\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.712962 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2vc9\" (UniqueName: \"kubernetes.io/projected/47495381-0bd0-460b-a6e4-e94ff273e610-kube-api-access-w2vc9\") pod \"ovn-controller-nsn5q-config-lmtvg\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:25 crc kubenswrapper[4817]: I0314 05:51:25.774818 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:26 crc kubenswrapper[4817]: W0314 05:51:26.639161 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47495381_0bd0_460b_a6e4_e94ff273e610.slice/crio-23b7ad1401e21e42e8737a66a87dee9c324b266906edfcd11e8498fe00bde43d WatchSource:0}: Error finding container 23b7ad1401e21e42e8737a66a87dee9c324b266906edfcd11e8498fe00bde43d: Status 404 returned error can't find the container with id 23b7ad1401e21e42e8737a66a87dee9c324b266906edfcd11e8498fe00bde43d Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.639726 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nsn5q-config-lmtvg"] Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.688095 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gp8wg"] Mar 14 05:51:26 crc kubenswrapper[4817]: W0314 05:51:26.695202 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d16d505_b626_4d62_8688_688edc7182c2.slice/crio-26e10b8d512b09be5bdb7645c5dfd23b7d2ef598efcf7eedbab40a802becb2dc WatchSource:0}: Error finding container 26e10b8d512b09be5bdb7645c5dfd23b7d2ef598efcf7eedbab40a802becb2dc: Status 404 returned error can't find the container with id 26e10b8d512b09be5bdb7645c5dfd23b7d2ef598efcf7eedbab40a802becb2dc Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.701686 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.816433 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f20e21c5-3d26-4494-a4d7-43323e059f31","Type":"ContainerStarted","Data":"cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816"} Mar 14 
05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.816725 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.818061 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nsn5q-config-lmtvg" event={"ID":"47495381-0bd0-460b-a6e4-e94ff273e610","Type":"ContainerStarted","Data":"23b7ad1401e21e42e8737a66a87dee9c324b266906edfcd11e8498fe00bde43d"} Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.819643 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gp8wg" event={"ID":"7d16d505-b626-4d62-8688-688edc7182c2","Type":"ContainerStarted","Data":"26e10b8d512b09be5bdb7645c5dfd23b7d2ef598efcf7eedbab40a802becb2dc"} Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.824265 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa3ffb28-ad8e-4691-a5ff-ae17d083a019","Type":"ContainerStarted","Data":"0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9"} Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.824667 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 14 05:51:26 crc kubenswrapper[4817]: I0314 05:51:26.845911 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.45954584 podStartE2EDuration="1m16.845878913s" podCreationTimestamp="2026-03-14 05:50:10 +0000 UTC" firstStartedPulling="2026-03-14 05:50:12.093763247 +0000 UTC m=+1066.132023993" lastFinishedPulling="2026-03-14 05:50:48.48009632 +0000 UTC m=+1102.518357066" observedRunningTime="2026-03-14 05:51:26.840771767 +0000 UTC m=+1140.879032533" watchObservedRunningTime="2026-03-14 05:51:26.845878913 +0000 UTC m=+1140.884139659" Mar 14 05:51:27 crc kubenswrapper[4817]: I0314 05:51:27.836419 4817 generic.go:334] "Generic (PLEG): 
container finished" podID="47495381-0bd0-460b-a6e4-e94ff273e610" containerID="465bc17c5eb46eb039b8455167d1b8725fd8ae8d1993542f02cda40fa1323d4b" exitCode=0 Mar 14 05:51:27 crc kubenswrapper[4817]: I0314 05:51:27.836491 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nsn5q-config-lmtvg" event={"ID":"47495381-0bd0-460b-a6e4-e94ff273e610","Type":"ContainerDied","Data":"465bc17c5eb46eb039b8455167d1b8725fd8ae8d1993542f02cda40fa1323d4b"} Mar 14 05:51:27 crc kubenswrapper[4817]: I0314 05:51:27.838513 4817 generic.go:334] "Generic (PLEG): container finished" podID="7d16d505-b626-4d62-8688-688edc7182c2" containerID="46ae8e5162f6de03729b64304ff13a04a954dafe9262a20b47af1bb832126392" exitCode=0 Mar 14 05:51:27 crc kubenswrapper[4817]: I0314 05:51:27.838560 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gp8wg" event={"ID":"7d16d505-b626-4d62-8688-688edc7182c2","Type":"ContainerDied","Data":"46ae8e5162f6de03729b64304ff13a04a954dafe9262a20b47af1bb832126392"} Mar 14 05:51:27 crc kubenswrapper[4817]: I0314 05:51:27.841267 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zzk9h" event={"ID":"081334e9-833f-4a52-893e-29c7ac2241ac","Type":"ContainerStarted","Data":"c80b48c2fb2414ea6566f071e90ddeaa2e5555fbd1a4adafcca10b4d8e8ffb0b"} Mar 14 05:51:27 crc kubenswrapper[4817]: I0314 05:51:27.868931 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.457890581 podStartE2EDuration="1m17.86891382s" podCreationTimestamp="2026-03-14 05:50:10 +0000 UTC" firstStartedPulling="2026-03-14 05:50:12.250746921 +0000 UTC m=+1066.289007667" lastFinishedPulling="2026-03-14 05:50:48.66177016 +0000 UTC m=+1102.700030906" observedRunningTime="2026-03-14 05:51:26.868219089 +0000 UTC m=+1140.906479855" watchObservedRunningTime="2026-03-14 05:51:27.86891382 +0000 UTC m=+1141.907174566" Mar 14 05:51:27 crc 
kubenswrapper[4817]: I0314 05:51:27.883517 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zzk9h" podStartSLOduration=3.182745763 podStartE2EDuration="13.883495375s" podCreationTimestamp="2026-03-14 05:51:14 +0000 UTC" firstStartedPulling="2026-03-14 05:51:15.641419714 +0000 UTC m=+1129.679680460" lastFinishedPulling="2026-03-14 05:51:26.342169336 +0000 UTC m=+1140.380430072" observedRunningTime="2026-03-14 05:51:27.877785523 +0000 UTC m=+1141.916046269" watchObservedRunningTime="2026-03-14 05:51:27.883495375 +0000 UTC m=+1141.921756121" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.287421 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.292998 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.353624 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2vc9\" (UniqueName: \"kubernetes.io/projected/47495381-0bd0-460b-a6e4-e94ff273e610-kube-api-access-w2vc9\") pod \"47495381-0bd0-460b-a6e4-e94ff273e610\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.353982 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-additional-scripts\") pod \"47495381-0bd0-460b-a6e4-e94ff273e610\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354052 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgkw9\" (UniqueName: \"kubernetes.io/projected/7d16d505-b626-4d62-8688-688edc7182c2-kube-api-access-qgkw9\") pod 
\"7d16d505-b626-4d62-8688-688edc7182c2\" (UID: \"7d16d505-b626-4d62-8688-688edc7182c2\") " Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354074 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d16d505-b626-4d62-8688-688edc7182c2-operator-scripts\") pod \"7d16d505-b626-4d62-8688-688edc7182c2\" (UID: \"7d16d505-b626-4d62-8688-688edc7182c2\") " Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354107 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-log-ovn\") pod \"47495381-0bd0-460b-a6e4-e94ff273e610\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354136 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run-ovn\") pod \"47495381-0bd0-460b-a6e4-e94ff273e610\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354185 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-scripts\") pod \"47495381-0bd0-460b-a6e4-e94ff273e610\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354237 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run\") pod \"47495381-0bd0-460b-a6e4-e94ff273e610\" (UID: \"47495381-0bd0-460b-a6e4-e94ff273e610\") " Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354532 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "47495381-0bd0-460b-a6e4-e94ff273e610" (UID: "47495381-0bd0-460b-a6e4-e94ff273e610"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354566 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "47495381-0bd0-460b-a6e4-e94ff273e610" (UID: "47495381-0bd0-460b-a6e4-e94ff273e610"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354682 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run" (OuterVolumeSpecName: "var-run") pod "47495381-0bd0-460b-a6e4-e94ff273e610" (UID: "47495381-0bd0-460b-a6e4-e94ff273e610"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.354773 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "47495381-0bd0-460b-a6e4-e94ff273e610" (UID: "47495381-0bd0-460b-a6e4-e94ff273e610"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.355708 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d16d505-b626-4d62-8688-688edc7182c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d16d505-b626-4d62-8688-688edc7182c2" (UID: "7d16d505-b626-4d62-8688-688edc7182c2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.355965 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-scripts" (OuterVolumeSpecName: "scripts") pod "47495381-0bd0-460b-a6e4-e94ff273e610" (UID: "47495381-0bd0-460b-a6e4-e94ff273e610"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.356155 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.356177 4817 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.356190 4817 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47495381-0bd0-460b-a6e4-e94ff273e610-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.356204 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d16d505-b626-4d62-8688-688edc7182c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.356255 4817 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.356268 4817 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/47495381-0bd0-460b-a6e4-e94ff273e610-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.362726 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47495381-0bd0-460b-a6e4-e94ff273e610-kube-api-access-w2vc9" (OuterVolumeSpecName: "kube-api-access-w2vc9") pod "47495381-0bd0-460b-a6e4-e94ff273e610" (UID: "47495381-0bd0-460b-a6e4-e94ff273e610"). InnerVolumeSpecName "kube-api-access-w2vc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.362862 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d16d505-b626-4d62-8688-688edc7182c2-kube-api-access-qgkw9" (OuterVolumeSpecName: "kube-api-access-qgkw9") pod "7d16d505-b626-4d62-8688-688edc7182c2" (UID: "7d16d505-b626-4d62-8688-688edc7182c2"). InnerVolumeSpecName "kube-api-access-qgkw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.457985 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgkw9\" (UniqueName: \"kubernetes.io/projected/7d16d505-b626-4d62-8688-688edc7182c2-kube-api-access-qgkw9\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.458019 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2vc9\" (UniqueName: \"kubernetes.io/projected/47495381-0bd0-460b-a6e4-e94ff273e610-kube-api-access-w2vc9\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.872066 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nsn5q-config-lmtvg" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.872414 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nsn5q-config-lmtvg" event={"ID":"47495381-0bd0-460b-a6e4-e94ff273e610","Type":"ContainerDied","Data":"23b7ad1401e21e42e8737a66a87dee9c324b266906edfcd11e8498fe00bde43d"} Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.872458 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23b7ad1401e21e42e8737a66a87dee9c324b266906edfcd11e8498fe00bde43d" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.874454 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gp8wg" event={"ID":"7d16d505-b626-4d62-8688-688edc7182c2","Type":"ContainerDied","Data":"26e10b8d512b09be5bdb7645c5dfd23b7d2ef598efcf7eedbab40a802becb2dc"} Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.874474 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e10b8d512b09be5bdb7645c5dfd23b7d2ef598efcf7eedbab40a802becb2dc" Mar 14 05:51:29 crc kubenswrapper[4817]: I0314 05:51:29.874585 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gp8wg" Mar 14 05:51:30 crc kubenswrapper[4817]: I0314 05:51:30.178428 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nsn5q" Mar 14 05:51:30 crc kubenswrapper[4817]: I0314 05:51:30.404844 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nsn5q-config-lmtvg"] Mar 14 05:51:30 crc kubenswrapper[4817]: I0314 05:51:30.414708 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nsn5q-config-lmtvg"] Mar 14 05:51:30 crc kubenswrapper[4817]: I0314 05:51:30.741857 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47495381-0bd0-460b-a6e4-e94ff273e610" path="/var/lib/kubelet/pods/47495381-0bd0-460b-a6e4-e94ff273e610/volumes" Mar 14 05:51:33 crc kubenswrapper[4817]: I0314 05:51:33.905066 4817 generic.go:334] "Generic (PLEG): container finished" podID="081334e9-833f-4a52-893e-29c7ac2241ac" containerID="c80b48c2fb2414ea6566f071e90ddeaa2e5555fbd1a4adafcca10b4d8e8ffb0b" exitCode=0 Mar 14 05:51:33 crc kubenswrapper[4817]: I0314 05:51:33.905115 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zzk9h" event={"ID":"081334e9-833f-4a52-893e-29c7ac2241ac","Type":"ContainerDied","Data":"c80b48c2fb2414ea6566f071e90ddeaa2e5555fbd1a4adafcca10b4d8e8ffb0b"} Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.282656 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zzk9h" Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.361239 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-combined-ca-bundle\") pod \"081334e9-833f-4a52-893e-29c7ac2241ac\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.361422 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-db-sync-config-data\") pod \"081334e9-833f-4a52-893e-29c7ac2241ac\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.361500 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-config-data\") pod \"081334e9-833f-4a52-893e-29c7ac2241ac\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.361541 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fltbh\" (UniqueName: \"kubernetes.io/projected/081334e9-833f-4a52-893e-29c7ac2241ac-kube-api-access-fltbh\") pod \"081334e9-833f-4a52-893e-29c7ac2241ac\" (UID: \"081334e9-833f-4a52-893e-29c7ac2241ac\") " Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.369499 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081334e9-833f-4a52-893e-29c7ac2241ac-kube-api-access-fltbh" (OuterVolumeSpecName: "kube-api-access-fltbh") pod "081334e9-833f-4a52-893e-29c7ac2241ac" (UID: "081334e9-833f-4a52-893e-29c7ac2241ac"). InnerVolumeSpecName "kube-api-access-fltbh". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.369637 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "081334e9-833f-4a52-893e-29c7ac2241ac" (UID: "081334e9-833f-4a52-893e-29c7ac2241ac"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.389247 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "081334e9-833f-4a52-893e-29c7ac2241ac" (UID: "081334e9-833f-4a52-893e-29c7ac2241ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.410642 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-config-data" (OuterVolumeSpecName: "config-data") pod "081334e9-833f-4a52-893e-29c7ac2241ac" (UID: "081334e9-833f-4a52-893e-29c7ac2241ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.464080 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.464139 4817 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.464152 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/081334e9-833f-4a52-893e-29c7ac2241ac-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.464165 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fltbh\" (UniqueName: \"kubernetes.io/projected/081334e9-833f-4a52-893e-29c7ac2241ac-kube-api-access-fltbh\") on node \"crc\" DevicePath \"\""
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.923266 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zzk9h" event={"ID":"081334e9-833f-4a52-893e-29c7ac2241ac","Type":"ContainerDied","Data":"e421a5ba8772e7bf75cf0a9780b452782bd5ba53f48936021d7fa135f878e2ad"}
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.923319 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e421a5ba8772e7bf75cf0a9780b452782bd5ba53f48936021d7fa135f878e2ad"
Mar 14 05:51:35 crc kubenswrapper[4817]: I0314 05:51:35.923367 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zzk9h"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.355924 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-bvsj2"]
Mar 14 05:51:36 crc kubenswrapper[4817]: E0314 05:51:36.356610 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d16d505-b626-4d62-8688-688edc7182c2" containerName="mariadb-account-create-update"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.356624 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d16d505-b626-4d62-8688-688edc7182c2" containerName="mariadb-account-create-update"
Mar 14 05:51:36 crc kubenswrapper[4817]: E0314 05:51:36.356639 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081334e9-833f-4a52-893e-29c7ac2241ac" containerName="glance-db-sync"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.356647 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="081334e9-833f-4a52-893e-29c7ac2241ac" containerName="glance-db-sync"
Mar 14 05:51:36 crc kubenswrapper[4817]: E0314 05:51:36.356659 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47495381-0bd0-460b-a6e4-e94ff273e610" containerName="ovn-config"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.356665 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="47495381-0bd0-460b-a6e4-e94ff273e610" containerName="ovn-config"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.356828 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="47495381-0bd0-460b-a6e4-e94ff273e610" containerName="ovn-config"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.356838 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d16d505-b626-4d62-8688-688edc7182c2" containerName="mariadb-account-create-update"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.356853 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="081334e9-833f-4a52-893e-29c7ac2241ac" containerName="glance-db-sync"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.357735 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.362507 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-bvsj2"]
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.479627 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-config\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.479675 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-dns-svc\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.480089 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.480215 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.480269 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnzbk\" (UniqueName: \"kubernetes.io/projected/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-kube-api-access-cnzbk\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.582371 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.582447 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.582493 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnzbk\" (UniqueName: \"kubernetes.io/projected/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-kube-api-access-cnzbk\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.582543 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-config\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.582569 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-dns-svc\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.583509 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.583552 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.583562 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-dns-svc\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.584137 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-config\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.603726 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnzbk\" (UniqueName: \"kubernetes.io/projected/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-kube-api-access-cnzbk\") pod \"dnsmasq-dns-554567b4f7-bvsj2\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:36 crc kubenswrapper[4817]: I0314 05:51:36.694539 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:37 crc kubenswrapper[4817]: I0314 05:51:37.147359 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-bvsj2"]
Mar 14 05:51:37 crc kubenswrapper[4817]: W0314 05:51:37.157215 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3e2bf07_c945_4dd9_a5d7_dd451a807bbc.slice/crio-83f8f08fe8ec30738a88519ae4e5a367dd8021032c3330d09046e317ca54bf2a WatchSource:0}: Error finding container 83f8f08fe8ec30738a88519ae4e5a367dd8021032c3330d09046e317ca54bf2a: Status 404 returned error can't find the container with id 83f8f08fe8ec30738a88519ae4e5a367dd8021032c3330d09046e317ca54bf2a
Mar 14 05:51:37 crc kubenswrapper[4817]: I0314 05:51:37.943855 4817 generic.go:334] "Generic (PLEG): container finished" podID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerID="d42ff316dc3dfd70aeebe6d056fcf9d108db2f43251b9f04ea3855a861d506c6" exitCode=0
Mar 14 05:51:37 crc kubenswrapper[4817]: I0314 05:51:37.943942 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" event={"ID":"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc","Type":"ContainerDied","Data":"d42ff316dc3dfd70aeebe6d056fcf9d108db2f43251b9f04ea3855a861d506c6"}
Mar 14 05:51:37 crc kubenswrapper[4817]: I0314 05:51:37.944179 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" event={"ID":"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc","Type":"ContainerStarted","Data":"83f8f08fe8ec30738a88519ae4e5a367dd8021032c3330d09046e317ca54bf2a"}
Mar 14 05:51:39 crc kubenswrapper[4817]: I0314 05:51:39.960220 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" event={"ID":"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc","Type":"ContainerStarted","Data":"a481e1f27ab921f9b3d1a21d47f880a5a26f5864e8c43e5dfb08a8519a2de2e9"}
Mar 14 05:51:39 crc kubenswrapper[4817]: I0314 05:51:39.960805 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2"
Mar 14 05:51:39 crc kubenswrapper[4817]: I0314 05:51:39.985041 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" podStartSLOduration=3.985022613 podStartE2EDuration="3.985022613s" podCreationTimestamp="2026-03-14 05:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:39.980632078 +0000 UTC m=+1154.018892824" watchObservedRunningTime="2026-03-14 05:51:39.985022613 +0000 UTC m=+1154.023283359"
Mar 14 05:51:41 crc kubenswrapper[4817]: I0314 05:51:41.406126 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:51:41 crc kubenswrapper[4817]: I0314 05:51:41.749109 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.193461 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-97754"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.202473 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-97754"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.202932 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-97754"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.284794 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04aca68c-f19d-46d3-a950-92b0c5aec127-operator-scripts\") pod \"cinder-db-create-97754\" (UID: \"04aca68c-f19d-46d3-a950-92b0c5aec127\") " pod="openstack/cinder-db-create-97754"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.285244 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqq68\" (UniqueName: \"kubernetes.io/projected/04aca68c-f19d-46d3-a950-92b0c5aec127-kube-api-access-kqq68\") pod \"cinder-db-create-97754\" (UID: \"04aca68c-f19d-46d3-a950-92b0c5aec127\") " pod="openstack/cinder-db-create-97754"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.297289 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-345b-account-create-update-7ww2n"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.298932 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-345b-account-create-update-7ww2n"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.301281 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.314042 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-345b-account-create-update-7ww2n"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.386123 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z824r\" (UniqueName: \"kubernetes.io/projected/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-kube-api-access-z824r\") pod \"cinder-345b-account-create-update-7ww2n\" (UID: \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\") " pod="openstack/cinder-345b-account-create-update-7ww2n"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.386587 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04aca68c-f19d-46d3-a950-92b0c5aec127-operator-scripts\") pod \"cinder-db-create-97754\" (UID: \"04aca68c-f19d-46d3-a950-92b0c5aec127\") " pod="openstack/cinder-db-create-97754"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.386766 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-operator-scripts\") pod \"cinder-345b-account-create-update-7ww2n\" (UID: \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\") " pod="openstack/cinder-345b-account-create-update-7ww2n"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.386974 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqq68\" (UniqueName: \"kubernetes.io/projected/04aca68c-f19d-46d3-a950-92b0c5aec127-kube-api-access-kqq68\") pod \"cinder-db-create-97754\" (UID: \"04aca68c-f19d-46d3-a950-92b0c5aec127\") " pod="openstack/cinder-db-create-97754"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.387470 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04aca68c-f19d-46d3-a950-92b0c5aec127-operator-scripts\") pod \"cinder-db-create-97754\" (UID: \"04aca68c-f19d-46d3-a950-92b0c5aec127\") " pod="openstack/cinder-db-create-97754"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.408220 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqq68\" (UniqueName: \"kubernetes.io/projected/04aca68c-f19d-46d3-a950-92b0c5aec127-kube-api-access-kqq68\") pod \"cinder-db-create-97754\" (UID: \"04aca68c-f19d-46d3-a950-92b0c5aec127\") " pod="openstack/cinder-db-create-97754"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.488163 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-operator-scripts\") pod \"cinder-345b-account-create-update-7ww2n\" (UID: \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\") " pod="openstack/cinder-345b-account-create-update-7ww2n"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.488287 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z824r\" (UniqueName: \"kubernetes.io/projected/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-kube-api-access-z824r\") pod \"cinder-345b-account-create-update-7ww2n\" (UID: \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\") " pod="openstack/cinder-345b-account-create-update-7ww2n"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.489471 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-operator-scripts\") pod \"cinder-345b-account-create-update-7ww2n\" (UID: \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\") " pod="openstack/cinder-345b-account-create-update-7ww2n"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.491976 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-t7fj5"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.493141 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-t7fj5"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.500540 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3a58-account-create-update-qt2nl"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.501581 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3a58-account-create-update-qt2nl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.503167 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.508676 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z824r\" (UniqueName: \"kubernetes.io/projected/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-kube-api-access-z824r\") pod \"cinder-345b-account-create-update-7ww2n\" (UID: \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\") " pod="openstack/cinder-345b-account-create-update-7ww2n"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.513431 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-t7fj5"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.523238 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-97754"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.525786 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3a58-account-create-update-qt2nl"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.615348 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-345b-account-create-update-7ww2n"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.626221 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-chbsl"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.629961 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-chbsl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.642568 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-chbsl"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.692397 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-operator-scripts\") pod \"barbican-3a58-account-create-update-qt2nl\" (UID: \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\") " pod="openstack/barbican-3a58-account-create-update-qt2nl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.692467 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwvjm\" (UniqueName: \"kubernetes.io/projected/8099b872-793e-4e42-816d-a6ca9b72624c-kube-api-access-cwvjm\") pod \"barbican-db-create-t7fj5\" (UID: \"8099b872-793e-4e42-816d-a6ca9b72624c\") " pod="openstack/barbican-db-create-t7fj5"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.692513 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8099b872-793e-4e42-816d-a6ca9b72624c-operator-scripts\") pod \"barbican-db-create-t7fj5\" (UID: \"8099b872-793e-4e42-816d-a6ca9b72624c\") " pod="openstack/barbican-db-create-t7fj5"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.692593 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll55v\" (UniqueName: \"kubernetes.io/projected/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-kube-api-access-ll55v\") pod \"barbican-3a58-account-create-update-qt2nl\" (UID: \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\") " pod="openstack/barbican-3a58-account-create-update-qt2nl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.718993 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-856gn"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.720699 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.724400 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r7ltc"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.724725 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.724820 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.724855 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.728125 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d4d1-account-create-update-4v66d"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.729654 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d4d1-account-create-update-4v66d"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.732685 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.736098 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-856gn"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.745768 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d4d1-account-create-update-4v66d"]
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.794255 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8099b872-793e-4e42-816d-a6ca9b72624c-operator-scripts\") pod \"barbican-db-create-t7fj5\" (UID: \"8099b872-793e-4e42-816d-a6ca9b72624c\") " pod="openstack/barbican-db-create-t7fj5"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.795219 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll55v\" (UniqueName: \"kubernetes.io/projected/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-kube-api-access-ll55v\") pod \"barbican-3a58-account-create-update-qt2nl\" (UID: \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\") " pod="openstack/barbican-3a58-account-create-update-qt2nl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.795141 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8099b872-793e-4e42-816d-a6ca9b72624c-operator-scripts\") pod \"barbican-db-create-t7fj5\" (UID: \"8099b872-793e-4e42-816d-a6ca9b72624c\") " pod="openstack/barbican-db-create-t7fj5"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.795292 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-operator-scripts\") pod \"barbican-3a58-account-create-update-qt2nl\" (UID: \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\") " pod="openstack/barbican-3a58-account-create-update-qt2nl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.795319 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8vf\" (UniqueName: \"kubernetes.io/projected/8c36c132-bd25-4067-af91-a4ae33514875-kube-api-access-ss8vf\") pod \"neutron-db-create-chbsl\" (UID: \"8c36c132-bd25-4067-af91-a4ae33514875\") " pod="openstack/neutron-db-create-chbsl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.795417 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c36c132-bd25-4067-af91-a4ae33514875-operator-scripts\") pod \"neutron-db-create-chbsl\" (UID: \"8c36c132-bd25-4067-af91-a4ae33514875\") " pod="openstack/neutron-db-create-chbsl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.795461 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwvjm\" (UniqueName: \"kubernetes.io/projected/8099b872-793e-4e42-816d-a6ca9b72624c-kube-api-access-cwvjm\") pod \"barbican-db-create-t7fj5\" (UID: \"8099b872-793e-4e42-816d-a6ca9b72624c\") " pod="openstack/barbican-db-create-t7fj5"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.796681 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-operator-scripts\") pod \"barbican-3a58-account-create-update-qt2nl\" (UID: \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\") " pod="openstack/barbican-3a58-account-create-update-qt2nl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.817291 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll55v\" (UniqueName: \"kubernetes.io/projected/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-kube-api-access-ll55v\") pod \"barbican-3a58-account-create-update-qt2nl\" (UID: \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\") " pod="openstack/barbican-3a58-account-create-update-qt2nl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.819222 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwvjm\" (UniqueName: \"kubernetes.io/projected/8099b872-793e-4e42-816d-a6ca9b72624c-kube-api-access-cwvjm\") pod \"barbican-db-create-t7fj5\" (UID: \"8099b872-793e-4e42-816d-a6ca9b72624c\") " pod="openstack/barbican-db-create-t7fj5"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.850681 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3a58-account-create-update-qt2nl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.896880 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g768d\" (UniqueName: \"kubernetes.io/projected/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-kube-api-access-g768d\") pod \"neutron-d4d1-account-create-update-4v66d\" (UID: \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\") " pod="openstack/neutron-d4d1-account-create-update-4v66d"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.896954 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-combined-ca-bundle\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.896981 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss8vf\" (UniqueName: \"kubernetes.io/projected/8c36c132-bd25-4067-af91-a4ae33514875-kube-api-access-ss8vf\") pod \"neutron-db-create-chbsl\" (UID: \"8c36c132-bd25-4067-af91-a4ae33514875\") " pod="openstack/neutron-db-create-chbsl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.897002 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c36c132-bd25-4067-af91-a4ae33514875-operator-scripts\") pod \"neutron-db-create-chbsl\" (UID: \"8c36c132-bd25-4067-af91-a4ae33514875\") " pod="openstack/neutron-db-create-chbsl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.898160 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c36c132-bd25-4067-af91-a4ae33514875-operator-scripts\") pod \"neutron-db-create-chbsl\" (UID: \"8c36c132-bd25-4067-af91-a4ae33514875\") " pod="openstack/neutron-db-create-chbsl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.898595 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-operator-scripts\") pod \"neutron-d4d1-account-create-update-4v66d\" (UID: \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\") " pod="openstack/neutron-d4d1-account-create-update-4v66d"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.898655 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-config-data\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.898678 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhtl\" (UniqueName: \"kubernetes.io/projected/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-kube-api-access-8zhtl\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.917534 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss8vf\" (UniqueName: \"kubernetes.io/projected/8c36c132-bd25-4067-af91-a4ae33514875-kube-api-access-ss8vf\") pod \"neutron-db-create-chbsl\" (UID: \"8c36c132-bd25-4067-af91-a4ae33514875\") " pod="openstack/neutron-db-create-chbsl"
Mar 14 05:51:43 crc kubenswrapper[4817]: I0314 05:51:43.997120 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-chbsl"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.000465 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zhtl\" (UniqueName: \"kubernetes.io/projected/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-kube-api-access-8zhtl\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.000596 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g768d\" (UniqueName: \"kubernetes.io/projected/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-kube-api-access-g768d\") pod \"neutron-d4d1-account-create-update-4v66d\" (UID: \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\") " pod="openstack/neutron-d4d1-account-create-update-4v66d"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.000644 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-combined-ca-bundle\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.000712 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-operator-scripts\") pod \"neutron-d4d1-account-create-update-4v66d\" (UID: \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\") " pod="openstack/neutron-d4d1-account-create-update-4v66d"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.000773 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-config-data\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.002591 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-operator-scripts\") pod \"neutron-d4d1-account-create-update-4v66d\" (UID: \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\") " pod="openstack/neutron-d4d1-account-create-update-4v66d"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.005652 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-config-data\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.013095 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-combined-ca-bundle\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.018413 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zhtl\" (UniqueName: \"kubernetes.io/projected/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-kube-api-access-8zhtl\") pod \"keystone-db-sync-856gn\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.020763 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g768d\" (UniqueName: \"kubernetes.io/projected/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-kube-api-access-g768d\") pod \"neutron-d4d1-account-create-update-4v66d\" (UID: \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\") " pod="openstack/neutron-d4d1-account-create-update-4v66d"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.047273 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-856gn"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.070043 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d4d1-account-create-update-4v66d"
Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.119092 4817 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-db-create-t7fj5" Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.152984 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-97754"] Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.269219 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-345b-account-create-update-7ww2n"] Mar 14 05:51:44 crc kubenswrapper[4817]: W0314 05:51:44.296328 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce266d6e_a44d_41d8_ba3d_b1a4f8b3c485.slice/crio-2d45b9fa96f3a1f690eee48b5cb97809dc153c12d194cceb10a863854da61a1d WatchSource:0}: Error finding container 2d45b9fa96f3a1f690eee48b5cb97809dc153c12d194cceb10a863854da61a1d: Status 404 returned error can't find the container with id 2d45b9fa96f3a1f690eee48b5cb97809dc153c12d194cceb10a863854da61a1d Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.416325 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3a58-account-create-update-qt2nl"] Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.593431 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-chbsl"] Mar 14 05:51:44 crc kubenswrapper[4817]: W0314 05:51:44.746394 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bcb7120_12b3_4b62_9da4_e8c44e8a3567.slice/crio-28b94c23aedf00b235b90e7107f55b9004fea3914bde845b1dd2e69553ef0858 WatchSource:0}: Error finding container 28b94c23aedf00b235b90e7107f55b9004fea3914bde845b1dd2e69553ef0858: Status 404 returned error can't find the container with id 28b94c23aedf00b235b90e7107f55b9004fea3914bde845b1dd2e69553ef0858 Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.759981 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d4d1-account-create-update-4v66d"] Mar 14 
05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.803176 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-856gn"] Mar 14 05:51:44 crc kubenswrapper[4817]: I0314 05:51:44.887535 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-t7fj5"] Mar 14 05:51:44 crc kubenswrapper[4817]: W0314 05:51:44.899373 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8099b872_793e_4e42_816d_a6ca9b72624c.slice/crio-8db7dedf74a8b07452fee34a125496918cb56e8740247e84d7210aa6aabbb763 WatchSource:0}: Error finding container 8db7dedf74a8b07452fee34a125496918cb56e8740247e84d7210aa6aabbb763: Status 404 returned error can't find the container with id 8db7dedf74a8b07452fee34a125496918cb56e8740247e84d7210aa6aabbb763 Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.010467 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chbsl" event={"ID":"8c36c132-bd25-4067-af91-a4ae33514875","Type":"ContainerStarted","Data":"9571fb212edc00ddf1ff1393bc72ef6fed33836b4b5b897ea2b0ed3c3d535db4"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.026388 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3a58-account-create-update-qt2nl" event={"ID":"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5","Type":"ContainerStarted","Data":"9ef359b71ca51968e7558a2fd802642b825278b6ae0dc00f487d31f2953ace70"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.026432 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3a58-account-create-update-qt2nl" event={"ID":"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5","Type":"ContainerStarted","Data":"09f319b971941e8c2d9b870c921be32cb38852c07b547fc5088cd136a55607e0"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.028675 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-97754" 
event={"ID":"04aca68c-f19d-46d3-a950-92b0c5aec127","Type":"ContainerStarted","Data":"84fbcd97ed817f46ddfada6eb1dda57fd18ae6f3fa71dede60be4b3c90117508"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.028702 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-97754" event={"ID":"04aca68c-f19d-46d3-a950-92b0c5aec127","Type":"ContainerStarted","Data":"44f38dc747af6115fee071e7d06860dd27af67ceefc67bfaa47541e81150d270"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.031873 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-345b-account-create-update-7ww2n" event={"ID":"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485","Type":"ContainerStarted","Data":"bc669cc178cb9c4bd6993f56f76afd2250667e26e9cd839910ee396d6fafe5d2"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.031959 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-345b-account-create-update-7ww2n" event={"ID":"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485","Type":"ContainerStarted","Data":"2d45b9fa96f3a1f690eee48b5cb97809dc153c12d194cceb10a863854da61a1d"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.034294 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-856gn" event={"ID":"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb","Type":"ContainerStarted","Data":"e20c92256084579a6e1a588f85dec4d64bb0ac2c2ce4a484b0b752f9a2415963"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.035880 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d4d1-account-create-update-4v66d" event={"ID":"1bcb7120-12b3-4b62-9da4-e8c44e8a3567","Type":"ContainerStarted","Data":"28b94c23aedf00b235b90e7107f55b9004fea3914bde845b1dd2e69553ef0858"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.036981 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t7fj5" 
event={"ID":"8099b872-793e-4e42-816d-a6ca9b72624c","Type":"ContainerStarted","Data":"8db7dedf74a8b07452fee34a125496918cb56e8740247e84d7210aa6aabbb763"} Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.058639 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-97754" podStartSLOduration=2.058600715 podStartE2EDuration="2.058600715s" podCreationTimestamp="2026-03-14 05:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:45.054223261 +0000 UTC m=+1159.092484007" watchObservedRunningTime="2026-03-14 05:51:45.058600715 +0000 UTC m=+1159.096861471" Mar 14 05:51:45 crc kubenswrapper[4817]: I0314 05:51:45.074114 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-345b-account-create-update-7ww2n" podStartSLOduration=2.074098116 podStartE2EDuration="2.074098116s" podCreationTimestamp="2026-03-14 05:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:45.066915122 +0000 UTC m=+1159.105175888" watchObservedRunningTime="2026-03-14 05:51:45.074098116 +0000 UTC m=+1159.112358862" Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.081166 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chbsl" event={"ID":"8c36c132-bd25-4067-af91-a4ae33514875","Type":"ContainerStarted","Data":"884666666e547801c7d8c3bb822ca24423ddc55626c3fe4332777f97654af540"} Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.084270 4817 generic.go:334] "Generic (PLEG): container finished" podID="04aca68c-f19d-46d3-a950-92b0c5aec127" containerID="84fbcd97ed817f46ddfada6eb1dda57fd18ae6f3fa71dede60be4b3c90117508" exitCode=0 Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.084565 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-97754" event={"ID":"04aca68c-f19d-46d3-a950-92b0c5aec127","Type":"ContainerDied","Data":"84fbcd97ed817f46ddfada6eb1dda57fd18ae6f3fa71dede60be4b3c90117508"} Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.086075 4817 generic.go:334] "Generic (PLEG): container finished" podID="ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485" containerID="bc669cc178cb9c4bd6993f56f76afd2250667e26e9cd839910ee396d6fafe5d2" exitCode=0 Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.086229 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-345b-account-create-update-7ww2n" event={"ID":"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485","Type":"ContainerDied","Data":"bc669cc178cb9c4bd6993f56f76afd2250667e26e9cd839910ee396d6fafe5d2"} Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.087283 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d4d1-account-create-update-4v66d" event={"ID":"1bcb7120-12b3-4b62-9da4-e8c44e8a3567","Type":"ContainerStarted","Data":"21c23c877d2a3de92546fed1dab823be773e2add6cf5c304f62c43ef13aa6b0b"} Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.088733 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t7fj5" event={"ID":"8099b872-793e-4e42-816d-a6ca9b72624c","Type":"ContainerStarted","Data":"40eca744f5a6d9c042b7ec994be3afb3d2943a8e926895a0e56dff7a602e113b"} Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.107134 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-chbsl" podStartSLOduration=3.1071067980000002 podStartE2EDuration="3.107106798s" podCreationTimestamp="2026-03-14 05:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:46.100182431 +0000 UTC m=+1160.138443207" watchObservedRunningTime="2026-03-14 05:51:46.107106798 +0000 UTC m=+1160.145367544" Mar 14 05:51:46 crc 
kubenswrapper[4817]: I0314 05:51:46.213274 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-t7fj5" podStartSLOduration=3.213243019 podStartE2EDuration="3.213243019s" podCreationTimestamp="2026-03-14 05:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:46.138154421 +0000 UTC m=+1160.176415167" watchObservedRunningTime="2026-03-14 05:51:46.213243019 +0000 UTC m=+1160.251503765" Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.273621 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d4d1-account-create-update-4v66d" podStartSLOduration=3.273585036 podStartE2EDuration="3.273585036s" podCreationTimestamp="2026-03-14 05:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:46.19679235 +0000 UTC m=+1160.235053096" watchObservedRunningTime="2026-03-14 05:51:46.273585036 +0000 UTC m=+1160.311845782" Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.317494 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-3a58-account-create-update-qt2nl" podStartSLOduration=3.317459895 podStartE2EDuration="3.317459895s" podCreationTimestamp="2026-03-14 05:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:51:46.300520033 +0000 UTC m=+1160.338780789" watchObservedRunningTime="2026-03-14 05:51:46.317459895 +0000 UTC m=+1160.355720641" Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.696254 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.777724 4817 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-8554648995-jwq5b"] Mar 14 05:51:46 crc kubenswrapper[4817]: I0314 05:51:46.778085 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-jwq5b" podUID="26e76627-0df6-400a-981f-8672983e6741" containerName="dnsmasq-dns" containerID="cri-o://d1d90b32ea6f4232313b9c1dc5186a0547158cfd45a9d7d611729b5e27383c0c" gracePeriod=10 Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.096166 4817 generic.go:334] "Generic (PLEG): container finished" podID="8099b872-793e-4e42-816d-a6ca9b72624c" containerID="40eca744f5a6d9c042b7ec994be3afb3d2943a8e926895a0e56dff7a602e113b" exitCode=0 Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.096257 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t7fj5" event={"ID":"8099b872-793e-4e42-816d-a6ca9b72624c","Type":"ContainerDied","Data":"40eca744f5a6d9c042b7ec994be3afb3d2943a8e926895a0e56dff7a602e113b"} Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.406881 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-97754" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.493490 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqq68\" (UniqueName: \"kubernetes.io/projected/04aca68c-f19d-46d3-a950-92b0c5aec127-kube-api-access-kqq68\") pod \"04aca68c-f19d-46d3-a950-92b0c5aec127\" (UID: \"04aca68c-f19d-46d3-a950-92b0c5aec127\") " Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.493568 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04aca68c-f19d-46d3-a950-92b0c5aec127-operator-scripts\") pod \"04aca68c-f19d-46d3-a950-92b0c5aec127\" (UID: \"04aca68c-f19d-46d3-a950-92b0c5aec127\") " Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.494284 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04aca68c-f19d-46d3-a950-92b0c5aec127-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04aca68c-f19d-46d3-a950-92b0c5aec127" (UID: "04aca68c-f19d-46d3-a950-92b0c5aec127"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.499230 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04aca68c-f19d-46d3-a950-92b0c5aec127-kube-api-access-kqq68" (OuterVolumeSpecName: "kube-api-access-kqq68") pod "04aca68c-f19d-46d3-a950-92b0c5aec127" (UID: "04aca68c-f19d-46d3-a950-92b0c5aec127"). InnerVolumeSpecName "kube-api-access-kqq68". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.538529 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-345b-account-create-update-7ww2n" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.595249 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z824r\" (UniqueName: \"kubernetes.io/projected/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-kube-api-access-z824r\") pod \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\" (UID: \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\") " Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.595376 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-operator-scripts\") pod \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\" (UID: \"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485\") " Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.595657 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqq68\" (UniqueName: \"kubernetes.io/projected/04aca68c-f19d-46d3-a950-92b0c5aec127-kube-api-access-kqq68\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.595677 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04aca68c-f19d-46d3-a950-92b0c5aec127-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.595866 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485" (UID: "ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.598883 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-kube-api-access-z824r" (OuterVolumeSpecName: "kube-api-access-z824r") pod "ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485" (UID: "ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485"). InnerVolumeSpecName "kube-api-access-z824r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.696983 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:47 crc kubenswrapper[4817]: I0314 05:51:47.697032 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z824r\" (UniqueName: \"kubernetes.io/projected/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485-kube-api-access-z824r\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.112568 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-97754" event={"ID":"04aca68c-f19d-46d3-a950-92b0c5aec127","Type":"ContainerDied","Data":"44f38dc747af6115fee071e7d06860dd27af67ceefc67bfaa47541e81150d270"} Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.113161 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f38dc747af6115fee071e7d06860dd27af67ceefc67bfaa47541e81150d270" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.112630 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-97754" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.115730 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-345b-account-create-update-7ww2n" event={"ID":"ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485","Type":"ContainerDied","Data":"2d45b9fa96f3a1f690eee48b5cb97809dc153c12d194cceb10a863854da61a1d"} Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.115874 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d45b9fa96f3a1f690eee48b5cb97809dc153c12d194cceb10a863854da61a1d" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.115939 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-345b-account-create-update-7ww2n" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.122082 4817 generic.go:334] "Generic (PLEG): container finished" podID="26e76627-0df6-400a-981f-8672983e6741" containerID="d1d90b32ea6f4232313b9c1dc5186a0547158cfd45a9d7d611729b5e27383c0c" exitCode=0 Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.122191 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jwq5b" event={"ID":"26e76627-0df6-400a-981f-8672983e6741","Type":"ContainerDied","Data":"d1d90b32ea6f4232313b9c1dc5186a0547158cfd45a9d7d611729b5e27383c0c"} Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.399929 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-t7fj5" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.411857 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8099b872-793e-4e42-816d-a6ca9b72624c-operator-scripts\") pod \"8099b872-793e-4e42-816d-a6ca9b72624c\" (UID: \"8099b872-793e-4e42-816d-a6ca9b72624c\") " Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.411995 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwvjm\" (UniqueName: \"kubernetes.io/projected/8099b872-793e-4e42-816d-a6ca9b72624c-kube-api-access-cwvjm\") pod \"8099b872-793e-4e42-816d-a6ca9b72624c\" (UID: \"8099b872-793e-4e42-816d-a6ca9b72624c\") " Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.412174 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8099b872-793e-4e42-816d-a6ca9b72624c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8099b872-793e-4e42-816d-a6ca9b72624c" (UID: "8099b872-793e-4e42-816d-a6ca9b72624c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.414600 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8099b872-793e-4e42-816d-a6ca9b72624c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.421066 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8099b872-793e-4e42-816d-a6ca9b72624c-kube-api-access-cwvjm" (OuterVolumeSpecName: "kube-api-access-cwvjm") pod "8099b872-793e-4e42-816d-a6ca9b72624c" (UID: "8099b872-793e-4e42-816d-a6ca9b72624c"). InnerVolumeSpecName "kube-api-access-cwvjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.516428 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwvjm\" (UniqueName: \"kubernetes.io/projected/8099b872-793e-4e42-816d-a6ca9b72624c-kube-api-access-cwvjm\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:48 crc kubenswrapper[4817]: I0314 05:51:48.920306 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-jwq5b" podUID="26e76627-0df6-400a-981f-8672983e6741" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Mar 14 05:51:49 crc kubenswrapper[4817]: I0314 05:51:49.130199 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t7fj5" event={"ID":"8099b872-793e-4e42-816d-a6ca9b72624c","Type":"ContainerDied","Data":"8db7dedf74a8b07452fee34a125496918cb56e8740247e84d7210aa6aabbb763"} Mar 14 05:51:49 crc kubenswrapper[4817]: I0314 05:51:49.130240 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8db7dedf74a8b07452fee34a125496918cb56e8740247e84d7210aa6aabbb763" Mar 14 05:51:49 crc kubenswrapper[4817]: I0314 05:51:49.130312 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-t7fj5" Mar 14 05:51:50 crc kubenswrapper[4817]: I0314 05:51:50.139645 4817 generic.go:334] "Generic (PLEG): container finished" podID="8c36c132-bd25-4067-af91-a4ae33514875" containerID="884666666e547801c7d8c3bb822ca24423ddc55626c3fe4332777f97654af540" exitCode=0 Mar 14 05:51:50 crc kubenswrapper[4817]: I0314 05:51:50.139733 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chbsl" event={"ID":"8c36c132-bd25-4067-af91-a4ae33514875","Type":"ContainerDied","Data":"884666666e547801c7d8c3bb822ca24423ddc55626c3fe4332777f97654af540"} Mar 14 05:51:50 crc kubenswrapper[4817]: I0314 05:51:50.145205 4817 generic.go:334] "Generic (PLEG): container finished" podID="bcd8f446-ad9f-42b4-8709-4c5ca19a69b5" containerID="9ef359b71ca51968e7558a2fd802642b825278b6ae0dc00f487d31f2953ace70" exitCode=0 Mar 14 05:51:50 crc kubenswrapper[4817]: I0314 05:51:50.145245 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3a58-account-create-update-qt2nl" event={"ID":"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5","Type":"ContainerDied","Data":"9ef359b71ca51968e7558a2fd802642b825278b6ae0dc00f487d31f2953ace70"} Mar 14 05:51:51 crc kubenswrapper[4817]: I0314 05:51:51.154353 4817 generic.go:334] "Generic (PLEG): container finished" podID="1bcb7120-12b3-4b62-9da4-e8c44e8a3567" containerID="21c23c877d2a3de92546fed1dab823be773e2add6cf5c304f62c43ef13aa6b0b" exitCode=0 Mar 14 05:51:51 crc kubenswrapper[4817]: I0314 05:51:51.154441 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d4d1-account-create-update-4v66d" event={"ID":"1bcb7120-12b3-4b62-9da4-e8c44e8a3567","Type":"ContainerDied","Data":"21c23c877d2a3de92546fed1dab823be773e2add6cf5c304f62c43ef13aa6b0b"} Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.117824 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-chbsl" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.124783 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jwq5b" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.131163 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3a58-account-create-update-qt2nl" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.141809 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d4d1-account-create-update-4v66d" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146340 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-sb\") pod \"26e76627-0df6-400a-981f-8672983e6741\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146391 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn6b2\" (UniqueName: \"kubernetes.io/projected/26e76627-0df6-400a-981f-8672983e6741-kube-api-access-rn6b2\") pod \"26e76627-0df6-400a-981f-8672983e6741\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146421 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-operator-scripts\") pod \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\" (UID: \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146497 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-config\") pod 
\"26e76627-0df6-400a-981f-8672983e6741\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146547 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c36c132-bd25-4067-af91-a4ae33514875-operator-scripts\") pod \"8c36c132-bd25-4067-af91-a4ae33514875\" (UID: \"8c36c132-bd25-4067-af91-a4ae33514875\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146573 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-nb\") pod \"26e76627-0df6-400a-981f-8672983e6741\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146649 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss8vf\" (UniqueName: \"kubernetes.io/projected/8c36c132-bd25-4067-af91-a4ae33514875-kube-api-access-ss8vf\") pod \"8c36c132-bd25-4067-af91-a4ae33514875\" (UID: \"8c36c132-bd25-4067-af91-a4ae33514875\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146765 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll55v\" (UniqueName: \"kubernetes.io/projected/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-kube-api-access-ll55v\") pod \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\" (UID: \"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.146800 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-dns-svc\") pod \"26e76627-0df6-400a-981f-8672983e6741\" (UID: \"26e76627-0df6-400a-981f-8672983e6741\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.149626 4817 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/8c36c132-bd25-4067-af91-a4ae33514875-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c36c132-bd25-4067-af91-a4ae33514875" (UID: "8c36c132-bd25-4067-af91-a4ae33514875"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.150325 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcd8f446-ad9f-42b4-8709-4c5ca19a69b5" (UID: "bcd8f446-ad9f-42b4-8709-4c5ca19a69b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.160068 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c36c132-bd25-4067-af91-a4ae33514875-kube-api-access-ss8vf" (OuterVolumeSpecName: "kube-api-access-ss8vf") pod "8c36c132-bd25-4067-af91-a4ae33514875" (UID: "8c36c132-bd25-4067-af91-a4ae33514875"). InnerVolumeSpecName "kube-api-access-ss8vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.167204 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e76627-0df6-400a-981f-8672983e6741-kube-api-access-rn6b2" (OuterVolumeSpecName: "kube-api-access-rn6b2") pod "26e76627-0df6-400a-981f-8672983e6741" (UID: "26e76627-0df6-400a-981f-8672983e6741"). InnerVolumeSpecName "kube-api-access-rn6b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.194343 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-kube-api-access-ll55v" (OuterVolumeSpecName: "kube-api-access-ll55v") pod "bcd8f446-ad9f-42b4-8709-4c5ca19a69b5" (UID: "bcd8f446-ad9f-42b4-8709-4c5ca19a69b5"). InnerVolumeSpecName "kube-api-access-ll55v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.223024 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26e76627-0df6-400a-981f-8672983e6741" (UID: "26e76627-0df6-400a-981f-8672983e6741"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.223040 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-config" (OuterVolumeSpecName: "config") pod "26e76627-0df6-400a-981f-8672983e6741" (UID: "26e76627-0df6-400a-981f-8672983e6741"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.228534 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d4d1-account-create-update-4v66d" event={"ID":"1bcb7120-12b3-4b62-9da4-e8c44e8a3567","Type":"ContainerDied","Data":"28b94c23aedf00b235b90e7107f55b9004fea3914bde845b1dd2e69553ef0858"} Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.228579 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b94c23aedf00b235b90e7107f55b9004fea3914bde845b1dd2e69553ef0858" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.228649 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d4d1-account-create-update-4v66d" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.228779 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26e76627-0df6-400a-981f-8672983e6741" (UID: "26e76627-0df6-400a-981f-8672983e6741"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.234012 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-jwq5b" event={"ID":"26e76627-0df6-400a-981f-8672983e6741","Type":"ContainerDied","Data":"c639fa9492737899fdd888193124465d786915077d1709fbc6686613da26587a"} Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.234069 4817 scope.go:117] "RemoveContainer" containerID="d1d90b32ea6f4232313b9c1dc5186a0547158cfd45a9d7d611729b5e27383c0c" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.234211 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-jwq5b" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.242612 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26e76627-0df6-400a-981f-8672983e6741" (UID: "26e76627-0df6-400a-981f-8672983e6741"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.246277 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-chbsl" event={"ID":"8c36c132-bd25-4067-af91-a4ae33514875","Type":"ContainerDied","Data":"9571fb212edc00ddf1ff1393bc72ef6fed33836b4b5b897ea2b0ed3c3d535db4"} Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.246332 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9571fb212edc00ddf1ff1393bc72ef6fed33836b4b5b897ea2b0ed3c3d535db4" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.246440 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-chbsl" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.248710 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-operator-scripts\") pod \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\" (UID: \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.248757 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g768d\" (UniqueName: \"kubernetes.io/projected/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-kube-api-access-g768d\") pod \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\" (UID: \"1bcb7120-12b3-4b62-9da4-e8c44e8a3567\") " Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.251724 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.259043 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn6b2\" (UniqueName: \"kubernetes.io/projected/26e76627-0df6-400a-981f-8672983e6741-kube-api-access-rn6b2\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.259088 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.259103 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.259120 4817 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c36c132-bd25-4067-af91-a4ae33514875-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.259132 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.259150 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss8vf\" (UniqueName: \"kubernetes.io/projected/8c36c132-bd25-4067-af91-a4ae33514875-kube-api-access-ss8vf\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.259163 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll55v\" (UniqueName: \"kubernetes.io/projected/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5-kube-api-access-ll55v\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.259176 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26e76627-0df6-400a-981f-8672983e6741-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.257405 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3a58-account-create-update-qt2nl" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.255698 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bcb7120-12b3-4b62-9da4-e8c44e8a3567" (UID: "1bcb7120-12b3-4b62-9da4-e8c44e8a3567"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.257288 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3a58-account-create-update-qt2nl" event={"ID":"bcd8f446-ad9f-42b4-8709-4c5ca19a69b5","Type":"ContainerDied","Data":"09f319b971941e8c2d9b870c921be32cb38852c07b547fc5088cd136a55607e0"} Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.260195 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f319b971941e8c2d9b870c921be32cb38852c07b547fc5088cd136a55607e0" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.269273 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-kube-api-access-g768d" (OuterVolumeSpecName: "kube-api-access-g768d") pod "1bcb7120-12b3-4b62-9da4-e8c44e8a3567" (UID: "1bcb7120-12b3-4b62-9da4-e8c44e8a3567"). InnerVolumeSpecName "kube-api-access-g768d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.284778 4817 scope.go:117] "RemoveContainer" containerID="b90ae27da11aa98bc4fa842ac1f17a38e7e333eb595a2701ba5ed0703900b5fe" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.361212 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.361487 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g768d\" (UniqueName: \"kubernetes.io/projected/1bcb7120-12b3-4b62-9da4-e8c44e8a3567-kube-api-access-g768d\") on node \"crc\" DevicePath \"\"" Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.567961 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jwq5b"] Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.574377 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-jwq5b"] Mar 14 05:51:56 crc kubenswrapper[4817]: I0314 05:51:56.746717 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e76627-0df6-400a-981f-8672983e6741" path="/var/lib/kubelet/pods/26e76627-0df6-400a-981f-8672983e6741/volumes" Mar 14 05:51:58 crc kubenswrapper[4817]: E0314 05:51:58.640137 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Mar 14 05:51:58 crc kubenswrapper[4817]: E0314 05:51:58.640618 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zhtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-856gn_openstack(09b59299-5f4c-41d2-ad05-2d43b0c0cbfb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:51:58 crc kubenswrapper[4817]: E0314 05:51:58.641693 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-856gn" podUID="09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" Mar 14 05:51:58 crc kubenswrapper[4817]: I0314 05:51:58.921114 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-jwq5b" podUID="26e76627-0df6-400a-981f-8672983e6741" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Mar 14 05:51:59 crc kubenswrapper[4817]: E0314 05:51:59.282421 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-856gn" podUID="09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139078 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557792-b8788"] Mar 14 05:52:00 crc kubenswrapper[4817]: E0314 05:52:00.139411 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e76627-0df6-400a-981f-8672983e6741" containerName="init" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139425 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e76627-0df6-400a-981f-8672983e6741" containerName="init" Mar 14 05:52:00 crc kubenswrapper[4817]: E0314 05:52:00.139434 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04aca68c-f19d-46d3-a950-92b0c5aec127" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139440 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="04aca68c-f19d-46d3-a950-92b0c5aec127" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: E0314 05:52:00.139447 4817 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139453 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: E0314 05:52:00.139469 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c36c132-bd25-4067-af91-a4ae33514875" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139474 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c36c132-bd25-4067-af91-a4ae33514875" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: E0314 05:52:00.139488 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcb7120-12b3-4b62-9da4-e8c44e8a3567" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139494 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcb7120-12b3-4b62-9da4-e8c44e8a3567" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: E0314 05:52:00.139505 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e76627-0df6-400a-981f-8672983e6741" containerName="dnsmasq-dns" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139512 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e76627-0df6-400a-981f-8672983e6741" containerName="dnsmasq-dns" Mar 14 05:52:00 crc kubenswrapper[4817]: E0314 05:52:00.139522 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd8f446-ad9f-42b4-8709-4c5ca19a69b5" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139527 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd8f446-ad9f-42b4-8709-4c5ca19a69b5" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: E0314 
05:52:00.139536 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8099b872-793e-4e42-816d-a6ca9b72624c" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139542 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8099b872-793e-4e42-816d-a6ca9b72624c" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139717 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c36c132-bd25-4067-af91-a4ae33514875" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139733 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcb7120-12b3-4b62-9da4-e8c44e8a3567" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139743 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd8f446-ad9f-42b4-8709-4c5ca19a69b5" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139755 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485" containerName="mariadb-account-create-update" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139771 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8099b872-793e-4e42-816d-a6ca9b72624c" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139785 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="04aca68c-f19d-46d3-a950-92b0c5aec127" containerName="mariadb-database-create" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.139794 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e76627-0df6-400a-981f-8672983e6741" containerName="dnsmasq-dns" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.140453 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557792-b8788" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.144495 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.144636 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.144959 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.152606 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557792-b8788"] Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.221959 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvz74\" (UniqueName: \"kubernetes.io/projected/7526535b-7dd6-4ccb-837d-121b93caeb5c-kube-api-access-xvz74\") pod \"auto-csr-approver-29557792-b8788\" (UID: \"7526535b-7dd6-4ccb-837d-121b93caeb5c\") " pod="openshift-infra/auto-csr-approver-29557792-b8788" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.324458 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvz74\" (UniqueName: \"kubernetes.io/projected/7526535b-7dd6-4ccb-837d-121b93caeb5c-kube-api-access-xvz74\") pod \"auto-csr-approver-29557792-b8788\" (UID: \"7526535b-7dd6-4ccb-837d-121b93caeb5c\") " pod="openshift-infra/auto-csr-approver-29557792-b8788" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.345172 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvz74\" (UniqueName: \"kubernetes.io/projected/7526535b-7dd6-4ccb-837d-121b93caeb5c-kube-api-access-xvz74\") pod \"auto-csr-approver-29557792-b8788\" (UID: \"7526535b-7dd6-4ccb-837d-121b93caeb5c\") " 
pod="openshift-infra/auto-csr-approver-29557792-b8788" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.460693 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557792-b8788" Mar 14 05:52:00 crc kubenswrapper[4817]: I0314 05:52:00.868520 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557792-b8788"] Mar 14 05:52:01 crc kubenswrapper[4817]: I0314 05:52:01.299499 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557792-b8788" event={"ID":"7526535b-7dd6-4ccb-837d-121b93caeb5c","Type":"ContainerStarted","Data":"01a864fe347ae5d4e398184e2816f75fc708d64a38dbfddb1403a0cc88bb9379"} Mar 14 05:52:03 crc kubenswrapper[4817]: I0314 05:52:03.319292 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557792-b8788" event={"ID":"7526535b-7dd6-4ccb-837d-121b93caeb5c","Type":"ContainerStarted","Data":"d5576762a85178e95178c4ea78b9c9a133b609d5b87ee196d90ce59dac6e1bf5"} Mar 14 05:52:03 crc kubenswrapper[4817]: I0314 05:52:03.341560 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557792-b8788" podStartSLOduration=1.5104867899999999 podStartE2EDuration="3.341535919s" podCreationTimestamp="2026-03-14 05:52:00 +0000 UTC" firstStartedPulling="2026-03-14 05:52:00.880887506 +0000 UTC m=+1174.919148252" lastFinishedPulling="2026-03-14 05:52:02.711936635 +0000 UTC m=+1176.750197381" observedRunningTime="2026-03-14 05:52:03.334213978 +0000 UTC m=+1177.372474724" watchObservedRunningTime="2026-03-14 05:52:03.341535919 +0000 UTC m=+1177.379796655" Mar 14 05:52:04 crc kubenswrapper[4817]: I0314 05:52:04.329749 4817 generic.go:334] "Generic (PLEG): container finished" podID="7526535b-7dd6-4ccb-837d-121b93caeb5c" containerID="d5576762a85178e95178c4ea78b9c9a133b609d5b87ee196d90ce59dac6e1bf5" exitCode=0 Mar 14 05:52:04 crc 
kubenswrapper[4817]: I0314 05:52:04.329832 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557792-b8788" event={"ID":"7526535b-7dd6-4ccb-837d-121b93caeb5c","Type":"ContainerDied","Data":"d5576762a85178e95178c4ea78b9c9a133b609d5b87ee196d90ce59dac6e1bf5"} Mar 14 05:52:05 crc kubenswrapper[4817]: I0314 05:52:05.624815 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557792-b8788" Mar 14 05:52:05 crc kubenswrapper[4817]: I0314 05:52:05.896289 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvz74\" (UniqueName: \"kubernetes.io/projected/7526535b-7dd6-4ccb-837d-121b93caeb5c-kube-api-access-xvz74\") pod \"7526535b-7dd6-4ccb-837d-121b93caeb5c\" (UID: \"7526535b-7dd6-4ccb-837d-121b93caeb5c\") " Mar 14 05:52:05 crc kubenswrapper[4817]: I0314 05:52:05.902347 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7526535b-7dd6-4ccb-837d-121b93caeb5c-kube-api-access-xvz74" (OuterVolumeSpecName: "kube-api-access-xvz74") pod "7526535b-7dd6-4ccb-837d-121b93caeb5c" (UID: "7526535b-7dd6-4ccb-837d-121b93caeb5c"). InnerVolumeSpecName "kube-api-access-xvz74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:52:06 crc kubenswrapper[4817]: I0314 05:52:05.999604 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvz74\" (UniqueName: \"kubernetes.io/projected/7526535b-7dd6-4ccb-837d-121b93caeb5c-kube-api-access-xvz74\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:06 crc kubenswrapper[4817]: I0314 05:52:06.355173 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557792-b8788" event={"ID":"7526535b-7dd6-4ccb-837d-121b93caeb5c","Type":"ContainerDied","Data":"01a864fe347ae5d4e398184e2816f75fc708d64a38dbfddb1403a0cc88bb9379"} Mar 14 05:52:06 crc kubenswrapper[4817]: I0314 05:52:06.355217 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a864fe347ae5d4e398184e2816f75fc708d64a38dbfddb1403a0cc88bb9379" Mar 14 05:52:06 crc kubenswrapper[4817]: I0314 05:52:06.355279 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557792-b8788" Mar 14 05:52:07 crc kubenswrapper[4817]: I0314 05:52:07.066535 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557786-hsfx6"] Mar 14 05:52:07 crc kubenswrapper[4817]: I0314 05:52:07.072533 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557786-hsfx6"] Mar 14 05:52:08 crc kubenswrapper[4817]: I0314 05:52:08.746347 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a7b6e7d-82c3-47d3-a668-15f2614066dd" path="/var/lib/kubelet/pods/7a7b6e7d-82c3-47d3-a668-15f2614066dd/volumes" Mar 14 05:52:14 crc kubenswrapper[4817]: I0314 05:52:14.438295 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-856gn" event={"ID":"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb","Type":"ContainerStarted","Data":"d82ea0da8426f784499c3eeb032081c3c29cfb8534ea5d19f7aeec83515cc006"} Mar 14 
05:52:14 crc kubenswrapper[4817]: I0314 05:52:14.460860 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-856gn" podStartSLOduration=2.836282671 podStartE2EDuration="31.460844062s" podCreationTimestamp="2026-03-14 05:51:43 +0000 UTC" firstStartedPulling="2026-03-14 05:51:44.803840564 +0000 UTC m=+1158.842101310" lastFinishedPulling="2026-03-14 05:52:13.428401955 +0000 UTC m=+1187.466662701" observedRunningTime="2026-03-14 05:52:14.453867372 +0000 UTC m=+1188.492128118" watchObservedRunningTime="2026-03-14 05:52:14.460844062 +0000 UTC m=+1188.499104808" Mar 14 05:52:39 crc kubenswrapper[4817]: I0314 05:52:39.753224 4817 generic.go:334] "Generic (PLEG): container finished" podID="09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" containerID="d82ea0da8426f784499c3eeb032081c3c29cfb8534ea5d19f7aeec83515cc006" exitCode=0 Mar 14 05:52:39 crc kubenswrapper[4817]: I0314 05:52:39.753327 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-856gn" event={"ID":"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb","Type":"ContainerDied","Data":"d82ea0da8426f784499c3eeb032081c3c29cfb8534ea5d19f7aeec83515cc006"} Mar 14 05:52:40 crc kubenswrapper[4817]: I0314 05:52:40.075149 4817 scope.go:117] "RemoveContainer" containerID="41a35aa41d57554c853a735456076d0db386f20a189879575d689ee8ecf206f3" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.064819 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-856gn" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.097677 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-combined-ca-bundle\") pod \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.097915 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zhtl\" (UniqueName: \"kubernetes.io/projected/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-kube-api-access-8zhtl\") pod \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.097987 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-config-data\") pod \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\" (UID: \"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb\") " Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.103295 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-kube-api-access-8zhtl" (OuterVolumeSpecName: "kube-api-access-8zhtl") pod "09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" (UID: "09b59299-5f4c-41d2-ad05-2d43b0c0cbfb"). InnerVolumeSpecName "kube-api-access-8zhtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.121879 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" (UID: "09b59299-5f4c-41d2-ad05-2d43b0c0cbfb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.142047 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-config-data" (OuterVolumeSpecName: "config-data") pod "09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" (UID: "09b59299-5f4c-41d2-ad05-2d43b0c0cbfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.199882 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zhtl\" (UniqueName: \"kubernetes.io/projected/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-kube-api-access-8zhtl\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.199943 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.199956 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.766527 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-856gn" event={"ID":"09b59299-5f4c-41d2-ad05-2d43b0c0cbfb","Type":"ContainerDied","Data":"e20c92256084579a6e1a588f85dec4d64bb0ac2c2ce4a484b0b752f9a2415963"} Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.766563 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e20c92256084579a6e1a588f85dec4d64bb0ac2c2ce4a484b0b752f9a2415963" Mar 14 05:52:41 crc kubenswrapper[4817]: I0314 05:52:41.766570 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-856gn" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.060762 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-2gkmd"] Mar 14 05:52:42 crc kubenswrapper[4817]: E0314 05:52:42.061361 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7526535b-7dd6-4ccb-837d-121b93caeb5c" containerName="oc" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.061386 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7526535b-7dd6-4ccb-837d-121b93caeb5c" containerName="oc" Mar 14 05:52:42 crc kubenswrapper[4817]: E0314 05:52:42.061404 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" containerName="keystone-db-sync" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.061412 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" containerName="keystone-db-sync" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.061654 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" containerName="keystone-db-sync" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.061686 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7526535b-7dd6-4ccb-837d-121b93caeb5c" containerName="oc" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.062909 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.082755 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nljmq"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.084430 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.089502 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.090198 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.090341 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.090488 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r7ltc" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.091119 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.095328 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-2gkmd"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.107275 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nljmq"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117481 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117548 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " 
pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117628 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2zt\" (UniqueName: \"kubernetes.io/projected/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-kube-api-access-lx2zt\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117677 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-config-data\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117700 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-scripts\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117719 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-config\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117746 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p456l\" (UniqueName: \"kubernetes.io/projected/085bd223-0737-4c1e-9769-02b36e20796b-kube-api-access-p456l\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " 
pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117807 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-credential-keys\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117838 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-combined-ca-bundle\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.117867 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-fernet-keys\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.120072 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-dns-svc\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.221998 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " 
pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222383 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222437 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2zt\" (UniqueName: \"kubernetes.io/projected/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-kube-api-access-lx2zt\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222477 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-config-data\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222499 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-scripts\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222522 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-config\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222550 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p456l\" (UniqueName: \"kubernetes.io/projected/085bd223-0737-4c1e-9769-02b36e20796b-kube-api-access-p456l\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222602 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-credential-keys\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222625 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-combined-ca-bundle\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222652 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-fernet-keys\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.222714 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-dns-svc\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.223771 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-dns-svc\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.224438 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.226391 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-config\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.227126 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.230648 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-combined-ca-bundle\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.236297 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-scripts\") pod \"keystone-bootstrap-nljmq\" (UID: 
\"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.236660 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-credential-keys\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.239499 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-config-data\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.244367 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-fernet-keys\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.250912 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2zt\" (UniqueName: \"kubernetes.io/projected/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-kube-api-access-lx2zt\") pod \"dnsmasq-dns-67795cd9-2gkmd\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.254538 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p456l\" (UniqueName: \"kubernetes.io/projected/085bd223-0737-4c1e-9769-02b36e20796b-kube-api-access-p456l\") pod \"keystone-bootstrap-nljmq\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc 
kubenswrapper[4817]: I0314 05:52:42.290021 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.292834 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.301437 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.301696 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.316129 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.366563 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fbf7q"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.367532 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.370925 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.371258 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9rpn5" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.371488 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.377325 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fbf7q"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.391387 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.423064 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427005 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb48g\" (UniqueName: \"kubernetes.io/projected/55cdf8b8-c7aa-40df-b968-24656d19c55c-kube-api-access-mb48g\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427055 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427090 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-scripts\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427121 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-run-httpd\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427141 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-combined-ca-bundle\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427165 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-config\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427184 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-config-data\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427209 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427229 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55l57\" (UniqueName: \"kubernetes.io/projected/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-kube-api-access-55l57\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.427268 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-log-httpd\") pod 
\"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.453975 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-z7ltl"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.455759 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.459528 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6wpmz" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.459786 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.460324 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.461979 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-z7ltl"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.489987 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-q8744"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.491625 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.506812 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.506992 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.507165 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-t6qlq" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529037 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-scripts\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529099 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-run-httpd\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529133 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-combined-ca-bundle\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529160 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69s7m\" (UniqueName: \"kubernetes.io/projected/a0e16259-a87f-4bb8-8fa1-5ee63129e195-kube-api-access-69s7m\") pod \"cinder-db-sync-z7ltl\" 
(UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529189 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-config\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529210 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-config-data\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529230 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529251 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-combined-ca-bundle\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529270 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-config-data\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529292 
4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55l57\" (UniqueName: \"kubernetes.io/projected/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-kube-api-access-55l57\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529333 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59388585-c2da-4111-ad61-aacdf15612aa-logs\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529356 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-log-httpd\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529376 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-combined-ca-bundle\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529400 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0e16259-a87f-4bb8-8fa1-5ee63129e195-etc-machine-id\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529430 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-scripts\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529462 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb48g\" (UniqueName: \"kubernetes.io/projected/55cdf8b8-c7aa-40df-b968-24656d19c55c-kube-api-access-mb48g\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529487 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-db-sync-config-data\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529509 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529540 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-config-data\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529561 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srv5h\" (UniqueName: 
\"kubernetes.io/projected/59388585-c2da-4111-ad61-aacdf15612aa-kube-api-access-srv5h\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.529582 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-scripts\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.537282 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-run-httpd\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.538248 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-log-httpd\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.555953 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-combined-ca-bundle\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.558502 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-scripts\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.576460 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55l57\" (UniqueName: \"kubernetes.io/projected/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-kube-api-access-55l57\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.579536 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.580598 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q8744"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.583825 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-config\") pod \"neutron-db-sync-fbf7q\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.595298 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.596589 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-config-data\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.604043 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-mb48g\" (UniqueName: \"kubernetes.io/projected/55cdf8b8-c7aa-40df-b968-24656d19c55c-kube-api-access-mb48g\") pod \"ceilometer-0\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.624534 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-znzjw"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.625926 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.626497 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.634408 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.638004 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xjvwd" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.644929 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-scripts\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.658420 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-db-sync-config-data\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.658592 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-config-data\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.658647 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srv5h\" (UniqueName: \"kubernetes.io/projected/59388585-c2da-4111-ad61-aacdf15612aa-kube-api-access-srv5h\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.659022 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-scripts\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.661516 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-scripts\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.661777 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69s7m\" (UniqueName: \"kubernetes.io/projected/a0e16259-a87f-4bb8-8fa1-5ee63129e195-kube-api-access-69s7m\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.661999 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-combined-ca-bundle\") pod \"cinder-db-sync-z7ltl\" (UID: 
\"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.662157 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-config-data\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.662480 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59388585-c2da-4111-ad61-aacdf15612aa-logs\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.662656 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-combined-ca-bundle\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.662824 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0e16259-a87f-4bb8-8fa1-5ee63129e195-etc-machine-id\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.663212 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0e16259-a87f-4bb8-8fa1-5ee63129e195-etc-machine-id\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.666911 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59388585-c2da-4111-ad61-aacdf15612aa-logs\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.674146 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-db-sync-config-data\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.680006 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-config-data\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.682245 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-combined-ca-bundle\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.687404 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-config-data\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.690048 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-scripts\") pod 
\"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.690657 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.707488 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69s7m\" (UniqueName: \"kubernetes.io/projected/a0e16259-a87f-4bb8-8fa1-5ee63129e195-kube-api-access-69s7m\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.711553 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srv5h\" (UniqueName: \"kubernetes.io/projected/59388585-c2da-4111-ad61-aacdf15612aa-kube-api-access-srv5h\") pod \"placement-db-sync-q8744\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.717300 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-combined-ca-bundle\") pod \"cinder-db-sync-z7ltl\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.764959 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-db-sync-config-data\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.765372 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-combined-ca-bundle\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.765440 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpp2q\" (UniqueName: \"kubernetes.io/projected/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-kube-api-access-rpp2q\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.800983 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-2gkmd"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.804076 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-znzjw"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.815153 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.816677 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.824461 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8"] Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.868908 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-combined-ca-bundle\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.868976 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.869009 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsqd\" (UniqueName: \"kubernetes.io/projected/4db56aa3-7556-4a5f-89db-662a8acf5948-kube-api-access-dmsqd\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.869041 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpp2q\" (UniqueName: \"kubernetes.io/projected/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-kube-api-access-rpp2q\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.869132 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.869155 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-db-sync-config-data\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.869179 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.869316 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-config\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.876704 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-db-sync-config-data\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.890084 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-combined-ca-bundle\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.894723 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpp2q\" (UniqueName: \"kubernetes.io/projected/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-kube-api-access-rpp2q\") pod \"barbican-db-sync-znzjw\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.910082 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.922046 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q8744" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.971598 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.971656 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.971696 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-config\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: 
\"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.971749 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.971776 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmsqd\" (UniqueName: \"kubernetes.io/projected/4db56aa3-7556-4a5f-89db-662a8acf5948-kube-api-access-dmsqd\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.973400 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.973487 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-config\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.975133 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 
14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.977282 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-znzjw" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.987224 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:42 crc kubenswrapper[4817]: I0314 05:52:42.992454 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmsqd\" (UniqueName: \"kubernetes.io/projected/4db56aa3-7556-4a5f-89db-662a8acf5948-kube-api-access-dmsqd\") pod \"dnsmasq-dns-5b6dbdb6f5-xgmf8\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.117267 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-2gkmd"] Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.148811 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.177788 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nljmq"] Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.305522 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fbf7q"] Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.337515 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.477170 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q8744"] Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.601715 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-znzjw"] Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.683707 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-z7ltl"] Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.798225 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8"] Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.799073 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nljmq" event={"ID":"085bd223-0737-4c1e-9769-02b36e20796b","Type":"ContainerStarted","Data":"d8d99ca6b8613994427f3e2d1769d7a5c531f5e64ad2a06a7e2177c95fcbc1a1"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.799130 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nljmq" event={"ID":"085bd223-0737-4c1e-9769-02b36e20796b","Type":"ContainerStarted","Data":"bc5d07eee21301fe190cc041601809ab6b2d12aac56691a83e82afef71030b44"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.806372 4817 generic.go:334] "Generic (PLEG): container finished" podID="01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" 
containerID="3145fbbc929edbaaf07f536ac61bcab22d9326eb791be98084fb36515cdf8995" exitCode=0 Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.806475 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-2gkmd" event={"ID":"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729","Type":"ContainerDied","Data":"3145fbbc929edbaaf07f536ac61bcab22d9326eb791be98084fb36515cdf8995"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.806502 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-2gkmd" event={"ID":"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729","Type":"ContainerStarted","Data":"7830db4000bf01ad55e1c9ce8d3df129a4b1f571d20d5db79468bd83a5784d24"} Mar 14 05:52:43 crc kubenswrapper[4817]: W0314 05:52:43.812712 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4db56aa3_7556_4a5f_89db_662a8acf5948.slice/crio-4ccc3b7d1ecd7d55b8ae5c31a7bb63711204653c56eecb9f7a0d5d11b1001ca5 WatchSource:0}: Error finding container 4ccc3b7d1ecd7d55b8ae5c31a7bb63711204653c56eecb9f7a0d5d11b1001ca5: Status 404 returned error can't find the container with id 4ccc3b7d1ecd7d55b8ae5c31a7bb63711204653c56eecb9f7a0d5d11b1001ca5 Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.826071 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-znzjw" event={"ID":"b1f8ace2-60bf-4cb4-b473-a92c7860b5af","Type":"ContainerStarted","Data":"4f98f79defc2bdc43eb2c70b6dd28c940fa7002eb97ecf9b665780e68763c46f"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.838499 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nljmq" podStartSLOduration=1.838477937 podStartE2EDuration="1.838477937s" podCreationTimestamp="2026-03-14 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 
05:52:43.831213878 +0000 UTC m=+1217.869474624" watchObservedRunningTime="2026-03-14 05:52:43.838477937 +0000 UTC m=+1217.876738683" Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.845438 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fbf7q" event={"ID":"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3","Type":"ContainerStarted","Data":"bbe42d57d935818f316d61b98ac093be210893b0c901fafc039d564e1e065f6d"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.845479 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fbf7q" event={"ID":"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3","Type":"ContainerStarted","Data":"cc08a400f5fe4531b9fddb129b266a071eb5de9a27070635bf9ff33133de8f75"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.865155 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-z7ltl" event={"ID":"a0e16259-a87f-4bb8-8fa1-5ee63129e195","Type":"ContainerStarted","Data":"320e61058bcd1471869cd8780efe8c1ed5efad09b5cfab7bc73b4081b1d0eaa5"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.880476 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q8744" event={"ID":"59388585-c2da-4111-ad61-aacdf15612aa","Type":"ContainerStarted","Data":"b673010e274b05cd6d89466a42aea582b91294e2c4f79da974d422922dff7301"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.882473 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerStarted","Data":"3d54dd6ef7a5844273becdfbe642c09456e0c464d950a5f7235ffa1de13d806b"} Mar 14 05:52:43 crc kubenswrapper[4817]: I0314 05:52:43.896647 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fbf7q" podStartSLOduration=1.896628149 podStartE2EDuration="1.896628149s" podCreationTimestamp="2026-03-14 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:52:43.896328001 +0000 UTC m=+1217.934588737" watchObservedRunningTime="2026-03-14 05:52:43.896628149 +0000 UTC m=+1217.934888895" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.190159 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.325437 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-dns-svc\") pod \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.325659 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-nb\") pod \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.325686 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-config\") pod \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.325717 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-sb\") pod \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.325747 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx2zt\" (UniqueName: 
\"kubernetes.io/projected/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-kube-api-access-lx2zt\") pod \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\" (UID: \"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729\") " Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.350277 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" (UID: "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.351322 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-kube-api-access-lx2zt" (OuterVolumeSpecName: "kube-api-access-lx2zt") pod "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" (UID: "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729"). InnerVolumeSpecName "kube-api-access-lx2zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.365117 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" (UID: "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.391431 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.392146 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" (UID: "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.411414 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-config" (OuterVolumeSpecName: "config") pod "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" (UID: "01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.429261 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.429295 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.429310 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.429321 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.429336 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx2zt\" (UniqueName: \"kubernetes.io/projected/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729-kube-api-access-lx2zt\") on node \"crc\" DevicePath \"\"" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.905116 4817 generic.go:334] "Generic (PLEG): container finished" podID="4db56aa3-7556-4a5f-89db-662a8acf5948" containerID="d4b99dfd9cb1ec78170452b9c2d88cba768dff71984563d847cf45d2fa8f06ad" exitCode=0 Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.905227 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" event={"ID":"4db56aa3-7556-4a5f-89db-662a8acf5948","Type":"ContainerDied","Data":"d4b99dfd9cb1ec78170452b9c2d88cba768dff71984563d847cf45d2fa8f06ad"} Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.905261 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" event={"ID":"4db56aa3-7556-4a5f-89db-662a8acf5948","Type":"ContainerStarted","Data":"4ccc3b7d1ecd7d55b8ae5c31a7bb63711204653c56eecb9f7a0d5d11b1001ca5"} Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.915711 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-2gkmd" Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.916362 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-2gkmd" event={"ID":"01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729","Type":"ContainerDied","Data":"7830db4000bf01ad55e1c9ce8d3df129a4b1f571d20d5db79468bd83a5784d24"} Mar 14 05:52:44 crc kubenswrapper[4817]: I0314 05:52:44.916403 4817 scope.go:117] "RemoveContainer" containerID="3145fbbc929edbaaf07f536ac61bcab22d9326eb791be98084fb36515cdf8995" Mar 14 05:52:45 crc kubenswrapper[4817]: I0314 05:52:45.029861 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-2gkmd"] Mar 14 05:52:45 crc kubenswrapper[4817]: I0314 05:52:45.041472 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-2gkmd"] Mar 14 05:52:45 crc kubenswrapper[4817]: I0314 05:52:45.928108 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" event={"ID":"4db56aa3-7556-4a5f-89db-662a8acf5948","Type":"ContainerStarted","Data":"580461da1329fd7332858cbec4d6656f5677926b7acfea0a86d8e109bc17c2f8"} Mar 14 05:52:45 crc kubenswrapper[4817]: I0314 05:52:45.928271 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:45 crc kubenswrapper[4817]: I0314 05:52:45.951049 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" podStartSLOduration=3.951021491 podStartE2EDuration="3.951021491s" podCreationTimestamp="2026-03-14 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:52:45.95099135 +0000 UTC m=+1219.989252116" watchObservedRunningTime="2026-03-14 05:52:45.951021491 +0000 UTC m=+1219.989282237" Mar 14 05:52:46 crc kubenswrapper[4817]: I0314 
05:52:46.760272 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" path="/var/lib/kubelet/pods/01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729/volumes" Mar 14 05:52:47 crc kubenswrapper[4817]: I0314 05:52:47.951227 4817 generic.go:334] "Generic (PLEG): container finished" podID="085bd223-0737-4c1e-9769-02b36e20796b" containerID="d8d99ca6b8613994427f3e2d1769d7a5c531f5e64ad2a06a7e2177c95fcbc1a1" exitCode=0 Mar 14 05:52:47 crc kubenswrapper[4817]: I0314 05:52:47.951344 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nljmq" event={"ID":"085bd223-0737-4c1e-9769-02b36e20796b","Type":"ContainerDied","Data":"d8d99ca6b8613994427f3e2d1769d7a5c531f5e64ad2a06a7e2177c95fcbc1a1"} Mar 14 05:52:53 crc kubenswrapper[4817]: I0314 05:52:53.151431 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:52:53 crc kubenswrapper[4817]: I0314 05:52:53.214505 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-bvsj2"] Mar 14 05:52:53 crc kubenswrapper[4817]: I0314 05:52:53.214748 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="dnsmasq-dns" containerID="cri-o://a481e1f27ab921f9b3d1a21d47f880a5a26f5864e8c43e5dfb08a8519a2de2e9" gracePeriod=10 Mar 14 05:52:53 crc kubenswrapper[4817]: E0314 05:52:53.529110 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3e2bf07_c945_4dd9_a5d7_dd451a807bbc.slice/crio-conmon-a481e1f27ab921f9b3d1a21d47f880a5a26f5864e8c43e5dfb08a8519a2de2e9.scope\": RecentStats: unable to find data in memory cache]" Mar 14 05:52:54 crc kubenswrapper[4817]: I0314 05:52:54.015812 4817 generic.go:334] "Generic (PLEG): 
container finished" podID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerID="a481e1f27ab921f9b3d1a21d47f880a5a26f5864e8c43e5dfb08a8519a2de2e9" exitCode=0 Mar 14 05:52:54 crc kubenswrapper[4817]: I0314 05:52:54.015930 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" event={"ID":"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc","Type":"ContainerDied","Data":"a481e1f27ab921f9b3d1a21d47f880a5a26f5864e8c43e5dfb08a8519a2de2e9"} Mar 14 05:52:56 crc kubenswrapper[4817]: I0314 05:52:56.695324 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Mar 14 05:53:01 crc kubenswrapper[4817]: I0314 05:53:01.695153 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.578688 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.657409 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-combined-ca-bundle\") pod \"085bd223-0737-4c1e-9769-02b36e20796b\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.657536 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-fernet-keys\") pod \"085bd223-0737-4c1e-9769-02b36e20796b\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.657571 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-credential-keys\") pod \"085bd223-0737-4c1e-9769-02b36e20796b\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.657614 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-config-data\") pod \"085bd223-0737-4c1e-9769-02b36e20796b\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.657638 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p456l\" (UniqueName: \"kubernetes.io/projected/085bd223-0737-4c1e-9769-02b36e20796b-kube-api-access-p456l\") pod \"085bd223-0737-4c1e-9769-02b36e20796b\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.657682 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-scripts\") pod \"085bd223-0737-4c1e-9769-02b36e20796b\" (UID: \"085bd223-0737-4c1e-9769-02b36e20796b\") " Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.665026 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "085bd223-0737-4c1e-9769-02b36e20796b" (UID: "085bd223-0737-4c1e-9769-02b36e20796b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.668125 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085bd223-0737-4c1e-9769-02b36e20796b-kube-api-access-p456l" (OuterVolumeSpecName: "kube-api-access-p456l") pod "085bd223-0737-4c1e-9769-02b36e20796b" (UID: "085bd223-0737-4c1e-9769-02b36e20796b"). InnerVolumeSpecName "kube-api-access-p456l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.668256 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "085bd223-0737-4c1e-9769-02b36e20796b" (UID: "085bd223-0737-4c1e-9769-02b36e20796b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.693200 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-scripts" (OuterVolumeSpecName: "scripts") pod "085bd223-0737-4c1e-9769-02b36e20796b" (UID: "085bd223-0737-4c1e-9769-02b36e20796b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:03 crc kubenswrapper[4817]: E0314 05:53:03.708498 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Mar 14 05:53:03 crc kubenswrapper[4817]: E0314 05:53:03.708767 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srv5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev
/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-q8744_openstack(59388585-c2da-4111-ad61-aacdf15612aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:53:03 crc kubenswrapper[4817]: E0314 05:53:03.710002 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-q8744" podUID="59388585-c2da-4111-ad61-aacdf15612aa" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.710250 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-config-data" (OuterVolumeSpecName: "config-data") pod "085bd223-0737-4c1e-9769-02b36e20796b" (UID: "085bd223-0737-4c1e-9769-02b36e20796b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.710518 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "085bd223-0737-4c1e-9769-02b36e20796b" (UID: "085bd223-0737-4c1e-9769-02b36e20796b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.760871 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.761752 4817 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.761763 4817 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.761771 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.761779 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p456l\" (UniqueName: \"kubernetes.io/projected/085bd223-0737-4c1e-9769-02b36e20796b-kube-api-access-p456l\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:03 crc kubenswrapper[4817]: I0314 05:53:03.761790 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/085bd223-0737-4c1e-9769-02b36e20796b-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.114612 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nljmq" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.115115 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nljmq" event={"ID":"085bd223-0737-4c1e-9769-02b36e20796b","Type":"ContainerDied","Data":"bc5d07eee21301fe190cc041601809ab6b2d12aac56691a83e82afef71030b44"} Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.115171 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc5d07eee21301fe190cc041601809ab6b2d12aac56691a83e82afef71030b44" Mar 14 05:53:04 crc kubenswrapper[4817]: E0314 05:53:04.116907 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-q8744" podUID="59388585-c2da-4111-ad61-aacdf15612aa" Mar 14 05:53:04 crc kubenswrapper[4817]: E0314 05:53:04.590600 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Mar 14 05:53:04 crc kubenswrapper[4817]: E0314 05:53:04.591038 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpp2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-znzjw_openstack(b1f8ace2-60bf-4cb4-b473-a92c7860b5af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:53:04 crc kubenswrapper[4817]: E0314 05:53:04.592250 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-znzjw" 
podUID="b1f8ace2-60bf-4cb4-b473-a92c7860b5af" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.766313 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nljmq"] Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.766383 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nljmq"] Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.814089 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bgn4z"] Mar 14 05:53:04 crc kubenswrapper[4817]: E0314 05:53:04.814468 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085bd223-0737-4c1e-9769-02b36e20796b" containerName="keystone-bootstrap" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.814490 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="085bd223-0737-4c1e-9769-02b36e20796b" containerName="keystone-bootstrap" Mar 14 05:53:04 crc kubenswrapper[4817]: E0314 05:53:04.814508 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" containerName="init" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.814515 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" containerName="init" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.814655 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="085bd223-0737-4c1e-9769-02b36e20796b" containerName="keystone-bootstrap" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.814675 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d1bdfc-1abd-4b3e-9dc0-ae9d65e4c729" containerName="init" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.815224 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.817779 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.819446 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.819668 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r7ltc" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.821259 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.821508 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.833692 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bgn4z"] Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.881323 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-config-data\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.881433 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-credential-keys\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.881522 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-fernet-keys\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.881757 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-scripts\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.881909 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvdw\" (UniqueName: \"kubernetes.io/projected/8efc2969-6d96-48a4-8fc1-108da2c8f778-kube-api-access-pzvdw\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.881976 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-combined-ca-bundle\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.983695 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-fernet-keys\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.983784 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-scripts\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.983832 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvdw\" (UniqueName: \"kubernetes.io/projected/8efc2969-6d96-48a4-8fc1-108da2c8f778-kube-api-access-pzvdw\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.983863 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-combined-ca-bundle\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.984012 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-config-data\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.984055 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-credential-keys\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.990641 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-scripts\") pod \"keystone-bootstrap-bgn4z\" (UID: 
\"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.991022 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-combined-ca-bundle\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.991590 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-fernet-keys\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.998013 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-credential-keys\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:04 crc kubenswrapper[4817]: I0314 05:53:04.999840 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-config-data\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:05 crc kubenswrapper[4817]: I0314 05:53:05.000186 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzvdw\" (UniqueName: \"kubernetes.io/projected/8efc2969-6d96-48a4-8fc1-108da2c8f778-kube-api-access-pzvdw\") pod \"keystone-bootstrap-bgn4z\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:05 crc kubenswrapper[4817]: E0314 
05:53:05.123519 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-znzjw" podUID="b1f8ace2-60bf-4cb4-b473-a92c7860b5af" Mar 14 05:53:05 crc kubenswrapper[4817]: I0314 05:53:05.134840 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:06 crc kubenswrapper[4817]: I0314 05:53:06.755247 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085bd223-0737-4c1e-9769-02b36e20796b" path="/var/lib/kubelet/pods/085bd223-0737-4c1e-9769-02b36e20796b/volumes" Mar 14 05:53:08 crc kubenswrapper[4817]: I0314 05:53:08.566501 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:53:08 crc kubenswrapper[4817]: I0314 05:53:08.566872 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:53:11 crc kubenswrapper[4817]: I0314 05:53:11.695362 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Mar 14 05:53:11 crc kubenswrapper[4817]: I0314 05:53:11.696389 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.812336 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.905610 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-sb\") pod \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.906598 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-config\") pod \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.906998 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-nb\") pod \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.907170 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnzbk\" (UniqueName: \"kubernetes.io/projected/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-kube-api-access-cnzbk\") pod \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.907291 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-dns-svc\") pod \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\" (UID: \"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc\") " Mar 14 
05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.916164 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-kube-api-access-cnzbk" (OuterVolumeSpecName: "kube-api-access-cnzbk") pod "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" (UID: "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc"). InnerVolumeSpecName "kube-api-access-cnzbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.954965 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" (UID: "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.961680 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-config" (OuterVolumeSpecName: "config") pod "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" (UID: "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.968511 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" (UID: "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:13 crc kubenswrapper[4817]: I0314 05:53:13.984319 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" (UID: "e3e2bf07-c945-4dd9-a5d7-dd451a807bbc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.010720 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.010777 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.010791 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.010808 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnzbk\" (UniqueName: \"kubernetes.io/projected/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-kube-api-access-cnzbk\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.010825 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.214590 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" 
event={"ID":"e3e2bf07-c945-4dd9-a5d7-dd451a807bbc","Type":"ContainerDied","Data":"83f8f08fe8ec30738a88519ae4e5a367dd8021032c3330d09046e317ca54bf2a"} Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.215130 4817 scope.go:117] "RemoveContainer" containerID="a481e1f27ab921f9b3d1a21d47f880a5a26f5864e8c43e5dfb08a8519a2de2e9" Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.214659 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.266026 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-bvsj2"] Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.276847 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-bvsj2"] Mar 14 05:53:14 crc kubenswrapper[4817]: I0314 05:53:14.754704 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" path="/var/lib/kubelet/pods/e3e2bf07-c945-4dd9-a5d7-dd451a807bbc/volumes" Mar 14 05:53:15 crc kubenswrapper[4817]: I0314 05:53:15.009567 4817 scope.go:117] "RemoveContainer" containerID="d42ff316dc3dfd70aeebe6d056fcf9d108db2f43251b9f04ea3855a861d506c6" Mar 14 05:53:15 crc kubenswrapper[4817]: E0314 05:53:15.042968 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 14 05:53:15 crc kubenswrapper[4817]: E0314 05:53:15.043878 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69s7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-z7ltl_openstack(a0e16259-a87f-4bb8-8fa1-5ee63129e195): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 05:53:15 crc kubenswrapper[4817]: E0314 05:53:15.046020 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-z7ltl" podUID="a0e16259-a87f-4bb8-8fa1-5ee63129e195" Mar 14 05:53:15 crc kubenswrapper[4817]: E0314 05:53:15.246460 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-z7ltl" podUID="a0e16259-a87f-4bb8-8fa1-5ee63129e195" Mar 14 05:53:15 crc kubenswrapper[4817]: I0314 05:53:15.509922 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bgn4z"] Mar 14 05:53:15 crc kubenswrapper[4817]: W0314 05:53:15.510626 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8efc2969_6d96_48a4_8fc1_108da2c8f778.slice/crio-a7543f02b2dbca8f74d07b25e68de38829a93fc128143062b11bc4ba31d6f9fb WatchSource:0}: Error finding container a7543f02b2dbca8f74d07b25e68de38829a93fc128143062b11bc4ba31d6f9fb: Status 404 returned error can't find the container with id a7543f02b2dbca8f74d07b25e68de38829a93fc128143062b11bc4ba31d6f9fb Mar 14 05:53:16 crc kubenswrapper[4817]: I0314 05:53:16.256177 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerStarted","Data":"abf84e64294f172cd9a384f27c6d4f82ed7e69baad354375bd6d27a8b8643846"} Mar 14 05:53:16 crc kubenswrapper[4817]: I0314 05:53:16.265105 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bgn4z" event={"ID":"8efc2969-6d96-48a4-8fc1-108da2c8f778","Type":"ContainerStarted","Data":"94af5f404095236199a2d3f6818c1dcd03cf3916b570b338cc950cd367b01657"} Mar 14 05:53:16 crc kubenswrapper[4817]: I0314 05:53:16.265195 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bgn4z" event={"ID":"8efc2969-6d96-48a4-8fc1-108da2c8f778","Type":"ContainerStarted","Data":"a7543f02b2dbca8f74d07b25e68de38829a93fc128143062b11bc4ba31d6f9fb"} Mar 14 05:53:16 crc kubenswrapper[4817]: I0314 05:53:16.294499 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bgn4z" podStartSLOduration=12.294466525 podStartE2EDuration="12.294466525s" podCreationTimestamp="2026-03-14 05:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:16.293760504 +0000 UTC m=+1250.332021250" watchObservedRunningTime="2026-03-14 05:53:16.294466525 +0000 UTC m=+1250.332727271" Mar 14 05:53:16 crc kubenswrapper[4817]: I0314 05:53:16.696648 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-bvsj2" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Mar 14 05:53:17 crc kubenswrapper[4817]: I0314 05:53:17.286087 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerStarted","Data":"5a01047c670e96bc7795fcfea790e229df3f8280863775fc2abe539eceb99223"} Mar 14 05:53:21 crc kubenswrapper[4817]: I0314 
05:53:21.339235 4817 generic.go:334] "Generic (PLEG): container finished" podID="8efc2969-6d96-48a4-8fc1-108da2c8f778" containerID="94af5f404095236199a2d3f6818c1dcd03cf3916b570b338cc950cd367b01657" exitCode=0 Mar 14 05:53:21 crc kubenswrapper[4817]: I0314 05:53:21.339320 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bgn4z" event={"ID":"8efc2969-6d96-48a4-8fc1-108da2c8f778","Type":"ContainerDied","Data":"94af5f404095236199a2d3f6818c1dcd03cf3916b570b338cc950cd367b01657"} Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.719738 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.918510 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-scripts\") pod \"8efc2969-6d96-48a4-8fc1-108da2c8f778\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.918616 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-combined-ca-bundle\") pod \"8efc2969-6d96-48a4-8fc1-108da2c8f778\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.918663 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-config-data\") pod \"8efc2969-6d96-48a4-8fc1-108da2c8f778\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.918685 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-credential-keys\") pod \"8efc2969-6d96-48a4-8fc1-108da2c8f778\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.918759 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzvdw\" (UniqueName: \"kubernetes.io/projected/8efc2969-6d96-48a4-8fc1-108da2c8f778-kube-api-access-pzvdw\") pod \"8efc2969-6d96-48a4-8fc1-108da2c8f778\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.918809 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-fernet-keys\") pod \"8efc2969-6d96-48a4-8fc1-108da2c8f778\" (UID: \"8efc2969-6d96-48a4-8fc1-108da2c8f778\") " Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.928127 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8efc2969-6d96-48a4-8fc1-108da2c8f778" (UID: "8efc2969-6d96-48a4-8fc1-108da2c8f778"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.930179 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efc2969-6d96-48a4-8fc1-108da2c8f778-kube-api-access-pzvdw" (OuterVolumeSpecName: "kube-api-access-pzvdw") pod "8efc2969-6d96-48a4-8fc1-108da2c8f778" (UID: "8efc2969-6d96-48a4-8fc1-108da2c8f778"). InnerVolumeSpecName "kube-api-access-pzvdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.930373 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-scripts" (OuterVolumeSpecName: "scripts") pod "8efc2969-6d96-48a4-8fc1-108da2c8f778" (UID: "8efc2969-6d96-48a4-8fc1-108da2c8f778"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.933190 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8efc2969-6d96-48a4-8fc1-108da2c8f778" (UID: "8efc2969-6d96-48a4-8fc1-108da2c8f778"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.966298 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-config-data" (OuterVolumeSpecName: "config-data") pod "8efc2969-6d96-48a4-8fc1-108da2c8f778" (UID: "8efc2969-6d96-48a4-8fc1-108da2c8f778"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:22 crc kubenswrapper[4817]: I0314 05:53:22.991441 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8efc2969-6d96-48a4-8fc1-108da2c8f778" (UID: "8efc2969-6d96-48a4-8fc1-108da2c8f778"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.022768 4817 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.022807 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.022824 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.022847 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.022862 4817 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8efc2969-6d96-48a4-8fc1-108da2c8f778-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.022877 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzvdw\" (UniqueName: \"kubernetes.io/projected/8efc2969-6d96-48a4-8fc1-108da2c8f778-kube-api-access-pzvdw\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.358859 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-znzjw" event={"ID":"b1f8ace2-60bf-4cb4-b473-a92c7860b5af","Type":"ContainerStarted","Data":"5f2efe95418943fd1d398d3d751fd55bbdc83ff0337222a34f5cfca8c98cd219"} Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 
05:53:23.361937 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q8744" event={"ID":"59388585-c2da-4111-ad61-aacdf15612aa","Type":"ContainerStarted","Data":"bd3c4dff18234cb6cf97fd9662d40682c8c72033c894a3c68cab09c778f9ff77"} Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.365161 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerStarted","Data":"9065f790d542f8e8d4055f03a24a1c0b11fcd75f603fa58481507eadd44ee8dd"} Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.367246 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bgn4z" event={"ID":"8efc2969-6d96-48a4-8fc1-108da2c8f778","Type":"ContainerDied","Data":"a7543f02b2dbca8f74d07b25e68de38829a93fc128143062b11bc4ba31d6f9fb"} Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.367277 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7543f02b2dbca8f74d07b25e68de38829a93fc128143062b11bc4ba31d6f9fb" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.367326 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bgn4z" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.380474 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-znzjw" podStartSLOduration=2.260243885 podStartE2EDuration="41.380443255s" podCreationTimestamp="2026-03-14 05:52:42 +0000 UTC" firstStartedPulling="2026-03-14 05:52:43.62295714 +0000 UTC m=+1217.661217886" lastFinishedPulling="2026-03-14 05:53:22.74315651 +0000 UTC m=+1256.781417256" observedRunningTime="2026-03-14 05:53:23.376685407 +0000 UTC m=+1257.414946163" watchObservedRunningTime="2026-03-14 05:53:23.380443255 +0000 UTC m=+1257.418704001" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.439244 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-q8744" podStartSLOduration=2.201948049 podStartE2EDuration="41.439212345s" podCreationTimestamp="2026-03-14 05:52:42 +0000 UTC" firstStartedPulling="2026-03-14 05:52:43.499876191 +0000 UTC m=+1217.538136937" lastFinishedPulling="2026-03-14 05:53:22.737140487 +0000 UTC m=+1256.775401233" observedRunningTime="2026-03-14 05:53:23.414978038 +0000 UTC m=+1257.453238784" watchObservedRunningTime="2026-03-14 05:53:23.439212345 +0000 UTC m=+1257.477473091" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.500820 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5dc6f976cd-xr97w"] Mar 14 05:53:23 crc kubenswrapper[4817]: E0314 05:53:23.501206 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efc2969-6d96-48a4-8fc1-108da2c8f778" containerName="keystone-bootstrap" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.501224 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efc2969-6d96-48a4-8fc1-108da2c8f778" containerName="keystone-bootstrap" Mar 14 05:53:23 crc kubenswrapper[4817]: E0314 05:53:23.501250 4817 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="init" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.501257 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="init" Mar 14 05:53:23 crc kubenswrapper[4817]: E0314 05:53:23.501273 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="dnsmasq-dns" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.501279 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="dnsmasq-dns" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.501423 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efc2969-6d96-48a4-8fc1-108da2c8f778" containerName="keystone-bootstrap" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.501454 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e2bf07-c945-4dd9-a5d7-dd451a807bbc" containerName="dnsmasq-dns" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.502010 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.504876 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.505032 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.505394 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.505541 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r7ltc" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.506041 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.510031 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.539431 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-credential-keys\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.539692 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-config-data\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.539818 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-public-tls-certs\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.539918 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-fernet-keys\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.540178 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-internal-tls-certs\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.540243 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpl64\" (UniqueName: \"kubernetes.io/projected/bfd749ee-b04f-45eb-8a54-2594c1d4378f-kube-api-access-hpl64\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.540703 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-combined-ca-bundle\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.540815 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-scripts\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.568943 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc6f976cd-xr97w"] Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.643112 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-internal-tls-certs\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.643187 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpl64\" (UniqueName: \"kubernetes.io/projected/bfd749ee-b04f-45eb-8a54-2594c1d4378f-kube-api-access-hpl64\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.643277 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-combined-ca-bundle\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.643307 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-scripts\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " 
pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.643361 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-credential-keys\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.643415 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-config-data\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.643462 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-public-tls-certs\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.643503 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-fernet-keys\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.650858 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-fernet-keys\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.661202 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-internal-tls-certs\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.663137 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-credential-keys\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.665416 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-config-data\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.669231 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-scripts\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.671603 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-combined-ca-bundle\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.672230 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfd749ee-b04f-45eb-8a54-2594c1d4378f-public-tls-certs\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.673035 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpl64\" (UniqueName: \"kubernetes.io/projected/bfd749ee-b04f-45eb-8a54-2594c1d4378f-kube-api-access-hpl64\") pod \"keystone-5dc6f976cd-xr97w\" (UID: \"bfd749ee-b04f-45eb-8a54-2594c1d4378f\") " pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:23 crc kubenswrapper[4817]: I0314 05:53:23.828463 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:24 crc kubenswrapper[4817]: I0314 05:53:24.381172 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc6f976cd-xr97w"] Mar 14 05:53:24 crc kubenswrapper[4817]: W0314 05:53:24.392779 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfd749ee_b04f_45eb_8a54_2594c1d4378f.slice/crio-6129415c4e5a06a5c527747c5c801a322da529c6ec7d98ab0f57c0e05f5169b9 WatchSource:0}: Error finding container 6129415c4e5a06a5c527747c5c801a322da529c6ec7d98ab0f57c0e05f5169b9: Status 404 returned error can't find the container with id 6129415c4e5a06a5c527747c5c801a322da529c6ec7d98ab0f57c0e05f5169b9 Mar 14 05:53:25 crc kubenswrapper[4817]: I0314 05:53:25.398652 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc6f976cd-xr97w" event={"ID":"bfd749ee-b04f-45eb-8a54-2594c1d4378f","Type":"ContainerStarted","Data":"a4d4a189a54a169ab141a57898509a9612fea43cee50d4eecb13a2b4457309bd"} Mar 14 05:53:25 crc kubenswrapper[4817]: I0314 05:53:25.399008 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc6f976cd-xr97w" 
event={"ID":"bfd749ee-b04f-45eb-8a54-2594c1d4378f","Type":"ContainerStarted","Data":"6129415c4e5a06a5c527747c5c801a322da529c6ec7d98ab0f57c0e05f5169b9"} Mar 14 05:53:25 crc kubenswrapper[4817]: I0314 05:53:25.399278 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:25 crc kubenswrapper[4817]: I0314 05:53:25.445781 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5dc6f976cd-xr97w" podStartSLOduration=2.44575891 podStartE2EDuration="2.44575891s" podCreationTimestamp="2026-03-14 05:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:25.438606945 +0000 UTC m=+1259.476867691" watchObservedRunningTime="2026-03-14 05:53:25.44575891 +0000 UTC m=+1259.484019656" Mar 14 05:53:27 crc kubenswrapper[4817]: I0314 05:53:27.488865 4817 generic.go:334] "Generic (PLEG): container finished" podID="59388585-c2da-4111-ad61-aacdf15612aa" containerID="bd3c4dff18234cb6cf97fd9662d40682c8c72033c894a3c68cab09c778f9ff77" exitCode=0 Mar 14 05:53:27 crc kubenswrapper[4817]: I0314 05:53:27.488948 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q8744" event={"ID":"59388585-c2da-4111-ad61-aacdf15612aa","Type":"ContainerDied","Data":"bd3c4dff18234cb6cf97fd9662d40682c8c72033c894a3c68cab09c778f9ff77"} Mar 14 05:53:30 crc kubenswrapper[4817]: I0314 05:53:30.526813 4817 generic.go:334] "Generic (PLEG): container finished" podID="b1f8ace2-60bf-4cb4-b473-a92c7860b5af" containerID="5f2efe95418943fd1d398d3d751fd55bbdc83ff0337222a34f5cfca8c98cd219" exitCode=0 Mar 14 05:53:30 crc kubenswrapper[4817]: I0314 05:53:30.526924 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-znzjw" 
event={"ID":"b1f8ace2-60bf-4cb4-b473-a92c7860b5af","Type":"ContainerDied","Data":"5f2efe95418943fd1d398d3d751fd55bbdc83ff0337222a34f5cfca8c98cd219"} Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.526271 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-znzjw" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.538364 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q8744" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.571277 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-znzjw" event={"ID":"b1f8ace2-60bf-4cb4-b473-a92c7860b5af","Type":"ContainerDied","Data":"4f98f79defc2bdc43eb2c70b6dd28c940fa7002eb97ecf9b665780e68763c46f"} Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.571340 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f98f79defc2bdc43eb2c70b6dd28c940fa7002eb97ecf9b665780e68763c46f" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.571423 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-znzjw" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.589767 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q8744" event={"ID":"59388585-c2da-4111-ad61-aacdf15612aa","Type":"ContainerDied","Data":"b673010e274b05cd6d89466a42aea582b91294e2c4f79da974d422922dff7301"} Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.590553 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b673010e274b05cd6d89466a42aea582b91294e2c4f79da974d422922dff7301" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.590723 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q8744" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.632095 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-combined-ca-bundle\") pod \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.632149 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpp2q\" (UniqueName: \"kubernetes.io/projected/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-kube-api-access-rpp2q\") pod \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.632265 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-db-sync-config-data\") pod \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\" (UID: \"b1f8ace2-60bf-4cb4-b473-a92c7860b5af\") " Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.640269 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-kube-api-access-rpp2q" (OuterVolumeSpecName: "kube-api-access-rpp2q") pod "b1f8ace2-60bf-4cb4-b473-a92c7860b5af" (UID: "b1f8ace2-60bf-4cb4-b473-a92c7860b5af"). InnerVolumeSpecName "kube-api-access-rpp2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.640319 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b1f8ace2-60bf-4cb4-b473-a92c7860b5af" (UID: "b1f8ace2-60bf-4cb4-b473-a92c7860b5af"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.666215 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1f8ace2-60bf-4cb4-b473-a92c7860b5af" (UID: "b1f8ace2-60bf-4cb4-b473-a92c7860b5af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.734055 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srv5h\" (UniqueName: \"kubernetes.io/projected/59388585-c2da-4111-ad61-aacdf15612aa-kube-api-access-srv5h\") pod \"59388585-c2da-4111-ad61-aacdf15612aa\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.734148 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-scripts\") pod \"59388585-c2da-4111-ad61-aacdf15612aa\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.734188 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-combined-ca-bundle\") pod \"59388585-c2da-4111-ad61-aacdf15612aa\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.734282 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-config-data\") pod \"59388585-c2da-4111-ad61-aacdf15612aa\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 
05:53:34.734328 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59388585-c2da-4111-ad61-aacdf15612aa-logs\") pod \"59388585-c2da-4111-ad61-aacdf15612aa\" (UID: \"59388585-c2da-4111-ad61-aacdf15612aa\") " Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.734744 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.734766 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpp2q\" (UniqueName: \"kubernetes.io/projected/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-kube-api-access-rpp2q\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.734778 4817 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1f8ace2-60bf-4cb4-b473-a92c7860b5af-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.735040 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59388585-c2da-4111-ad61-aacdf15612aa-logs" (OuterVolumeSpecName: "logs") pod "59388585-c2da-4111-ad61-aacdf15612aa" (UID: "59388585-c2da-4111-ad61-aacdf15612aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.750692 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-scripts" (OuterVolumeSpecName: "scripts") pod "59388585-c2da-4111-ad61-aacdf15612aa" (UID: "59388585-c2da-4111-ad61-aacdf15612aa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.754198 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59388585-c2da-4111-ad61-aacdf15612aa-kube-api-access-srv5h" (OuterVolumeSpecName: "kube-api-access-srv5h") pod "59388585-c2da-4111-ad61-aacdf15612aa" (UID: "59388585-c2da-4111-ad61-aacdf15612aa"). InnerVolumeSpecName "kube-api-access-srv5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.770260 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59388585-c2da-4111-ad61-aacdf15612aa" (UID: "59388585-c2da-4111-ad61-aacdf15612aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.776080 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-config-data" (OuterVolumeSpecName: "config-data") pod "59388585-c2da-4111-ad61-aacdf15612aa" (UID: "59388585-c2da-4111-ad61-aacdf15612aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.836427 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.836462 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59388585-c2da-4111-ad61-aacdf15612aa-logs\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.836474 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srv5h\" (UniqueName: \"kubernetes.io/projected/59388585-c2da-4111-ad61-aacdf15612aa-kube-api-access-srv5h\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.836484 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:34 crc kubenswrapper[4817]: I0314 05:53:34.836492 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59388585-c2da-4111-ad61-aacdf15612aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.740421 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-67dfb54788-qqrtk"] Mar 14 05:53:35 crc kubenswrapper[4817]: E0314 05:53:35.741091 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59388585-c2da-4111-ad61-aacdf15612aa" containerName="placement-db-sync" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.741104 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="59388585-c2da-4111-ad61-aacdf15612aa" containerName="placement-db-sync" Mar 14 05:53:35 crc kubenswrapper[4817]: E0314 05:53:35.741117 4817 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f8ace2-60bf-4cb4-b473-a92c7860b5af" containerName="barbican-db-sync" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.741123 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f8ace2-60bf-4cb4-b473-a92c7860b5af" containerName="barbican-db-sync" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.741288 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f8ace2-60bf-4cb4-b473-a92c7860b5af" containerName="barbican-db-sync" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.741304 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="59388585-c2da-4111-ad61-aacdf15612aa" containerName="placement-db-sync" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.742147 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.744176 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.746977 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.747287 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-t6qlq" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.750272 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.752426 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.752689 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-67dfb54788-qqrtk"] Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.845367 
4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-658f9b4fd7-k22b5"] Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.846831 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.857024 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-config-data\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.857163 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-public-tls-certs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.857296 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rctnc\" (UniqueName: \"kubernetes.io/projected/5aad3d14-3e24-460a-b6b3-9508031f76d6-kube-api-access-rctnc\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.857353 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-combined-ca-bundle\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.857545 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-internal-tls-certs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.857581 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aad3d14-3e24-460a-b6b3-9508031f76d6-logs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.857621 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-scripts\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.858776 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xjvwd" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.859124 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.859351 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.861770 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b7556b9f8-gtmkg"] Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.863543 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.873235 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.902979 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-658f9b4fd7-k22b5"] Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.928361 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b7556b9f8-gtmkg"] Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959393 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rctnc\" (UniqueName: \"kubernetes.io/projected/5aad3d14-3e24-460a-b6b3-9508031f76d6-kube-api-access-rctnc\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959443 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-combined-ca-bundle\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959498 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zpnk\" (UniqueName: \"kubernetes.io/projected/a737974d-6611-4a56-9bbb-27256380ae54-kube-api-access-8zpnk\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959524 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-combined-ca-bundle\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959553 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-config-data-custom\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959576 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-internal-tls-certs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959596 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aad3d14-3e24-460a-b6b3-9508031f76d6-logs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959615 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959632 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-scripts\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959663 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-config-data\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959695 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqxpx\" (UniqueName: \"kubernetes.io/projected/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-kube-api-access-mqxpx\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959713 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a737974d-6611-4a56-9bbb-27256380ae54-logs\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959730 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-config-data\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:35 crc kubenswrapper[4817]: 
I0314 05:53:35.959756 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-config-data\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959774 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-logs\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959855 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-config-data-custom\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.959879 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-public-tls-certs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.962129 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aad3d14-3e24-460a-b6b3-9508031f76d6-logs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.975336 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-public-tls-certs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.975748 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-config-data\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.975882 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-scripts\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.978115 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-combined-ca-bundle\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.982181 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-5xpnr"] Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.987662 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aad3d14-3e24-460a-b6b3-9508031f76d6-internal-tls-certs\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc 
kubenswrapper[4817]: I0314 05:53:35.988373 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.996978 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rctnc\" (UniqueName: \"kubernetes.io/projected/5aad3d14-3e24-460a-b6b3-9508031f76d6-kube-api-access-rctnc\") pod \"placement-67dfb54788-qqrtk\" (UID: \"5aad3d14-3e24-460a-b6b3-9508031f76d6\") " pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:35 crc kubenswrapper[4817]: I0314 05:53:35.998433 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-5xpnr"] Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061267 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-config-data\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061593 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqxpx\" (UniqueName: \"kubernetes.io/projected/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-kube-api-access-mqxpx\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061613 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a737974d-6611-4a56-9bbb-27256380ae54-logs\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061628 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-config-data\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061657 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgrmb\" (UniqueName: \"kubernetes.io/projected/4f526d8c-6e78-4c4a-9528-88006e41d2d7-kube-api-access-bgrmb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061676 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-logs\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061701 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-config-data-custom\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061731 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc 
kubenswrapper[4817]: I0314 05:53:36.061777 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zpnk\" (UniqueName: \"kubernetes.io/projected/a737974d-6611-4a56-9bbb-27256380ae54-kube-api-access-8zpnk\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061795 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-combined-ca-bundle\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061813 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061838 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-config-data-custom\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061856 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-config\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " 
pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061879 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-dns-svc\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.061926 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.065333 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a737974d-6611-4a56-9bbb-27256380ae54-logs\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.066921 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.068037 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-config-data-custom\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " 
pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.068655 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-logs\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.069878 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-config-data\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.070250 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.076385 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a737974d-6611-4a56-9bbb-27256380ae54-combined-ca-bundle\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.082842 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-config-data-custom\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.086458 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqxpx\" (UniqueName: 
\"kubernetes.io/projected/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-kube-api-access-mqxpx\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.087345 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0-config-data\") pod \"barbican-keystone-listener-6b7556b9f8-gtmkg\" (UID: \"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0\") " pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.090846 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zpnk\" (UniqueName: \"kubernetes.io/projected/a737974d-6611-4a56-9bbb-27256380ae54-kube-api-access-8zpnk\") pod \"barbican-worker-658f9b4fd7-k22b5\" (UID: \"a737974d-6611-4a56-9bbb-27256380ae54\") " pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.163588 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgrmb\" (UniqueName: \"kubernetes.io/projected/4f526d8c-6e78-4c4a-9528-88006e41d2d7-kube-api-access-bgrmb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.163657 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.163706 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.163731 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-config\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.163757 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-dns-svc\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.164705 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-dns-svc\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.171934 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d5789d498-qs55q"] Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.173223 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.177294 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.179276 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-config\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.179928 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.180885 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.195121 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d5789d498-qs55q"] Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.195494 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-658f9b4fd7-k22b5" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.197324 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.232779 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgrmb\" (UniqueName: \"kubernetes.io/projected/4f526d8c-6e78-4c4a-9528-88006e41d2d7-kube-api-access-bgrmb\") pod \"dnsmasq-dns-7f46f79845-5xpnr\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.266910 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-combined-ca-bundle\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.266959 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.267227 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6921187e-5058-45ef-9ba2-13a205560c11-logs\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.267608 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data-custom\") pod 
\"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.267707 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lplns\" (UniqueName: \"kubernetes.io/projected/6921187e-5058-45ef-9ba2-13a205560c11-kube-api-access-lplns\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.369486 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6921187e-5058-45ef-9ba2-13a205560c11-logs\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.369578 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data-custom\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.369605 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lplns\" (UniqueName: \"kubernetes.io/projected/6921187e-5058-45ef-9ba2-13a205560c11-kube-api-access-lplns\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.369653 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-combined-ca-bundle\") pod 
\"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.369671 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.370035 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6921187e-5058-45ef-9ba2-13a205560c11-logs\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.376023 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.378394 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data-custom\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.381577 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-combined-ca-bundle\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " 
pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.388284 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lplns\" (UniqueName: \"kubernetes.io/projected/6921187e-5058-45ef-9ba2-13a205560c11-kube-api-access-lplns\") pod \"barbican-api-5d5789d498-qs55q\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.498084 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:36 crc kubenswrapper[4817]: I0314 05:53:36.515773 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:37 crc kubenswrapper[4817]: I0314 05:53:37.620312 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerStarted","Data":"33bfd2c1402ff63d9347e227b5c718a4cf94008370ece5a64fc69cb8724d174e"} Mar 14 05:53:37 crc kubenswrapper[4817]: I0314 05:53:37.670506 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d5789d498-qs55q"] Mar 14 05:53:37 crc kubenswrapper[4817]: I0314 05:53:37.841246 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-67dfb54788-qqrtk"] Mar 14 05:53:37 crc kubenswrapper[4817]: I0314 05:53:37.873638 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-658f9b4fd7-k22b5"] Mar 14 05:53:37 crc kubenswrapper[4817]: W0314 05:53:37.898381 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda737974d_6611_4a56_9bbb_27256380ae54.slice/crio-422d6e68946c2f6090728493a060afbe997679020f5a6afe83d9782e5200dd96 WatchSource:0}: Error finding container 
422d6e68946c2f6090728493a060afbe997679020f5a6afe83d9782e5200dd96: Status 404 returned error can't find the container with id 422d6e68946c2f6090728493a060afbe997679020f5a6afe83d9782e5200dd96 Mar 14 05:53:37 crc kubenswrapper[4817]: W0314 05:53:37.909600 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aad3d14_3e24_460a_b6b3_9508031f76d6.slice/crio-6a3a11407b59f853477fea34cf45d2eca042c580c233943f65629bba60d4b58d WatchSource:0}: Error finding container 6a3a11407b59f853477fea34cf45d2eca042c580c233943f65629bba60d4b58d: Status 404 returned error can't find the container with id 6a3a11407b59f853477fea34cf45d2eca042c580c233943f65629bba60d4b58d Mar 14 05:53:37 crc kubenswrapper[4817]: I0314 05:53:37.983311 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b7556b9f8-gtmkg"] Mar 14 05:53:37 crc kubenswrapper[4817]: W0314 05:53:37.985556 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f526d8c_6e78_4c4a_9528_88006e41d2d7.slice/crio-d7a347e9112649b2c7cf48820478cefc797131405ca52a47d004ee5b83da341f WatchSource:0}: Error finding container d7a347e9112649b2c7cf48820478cefc797131405ca52a47d004ee5b83da341f: Status 404 returned error can't find the container with id d7a347e9112649b2c7cf48820478cefc797131405ca52a47d004ee5b83da341f Mar 14 05:53:37 crc kubenswrapper[4817]: I0314 05:53:37.990720 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-5xpnr"] Mar 14 05:53:37 crc kubenswrapper[4817]: W0314 05:53:37.993384 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc8e6f7b_b6a9_40e1_b71d_e61e17c72ee0.slice/crio-6b4af462134e6f3d9dc7ad22aa59488b9c5e4665379e547ff04d47e47316d865 WatchSource:0}: Error finding container 
6b4af462134e6f3d9dc7ad22aa59488b9c5e4665379e547ff04d47e47316d865: Status 404 returned error can't find the container with id 6b4af462134e6f3d9dc7ad22aa59488b9c5e4665379e547ff04d47e47316d865 Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.566270 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.566929 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.634759 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-565cfb5466-k8v6z"] Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.636667 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.641772 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.642055 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.650309 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5789d498-qs55q" event={"ID":"6921187e-5058-45ef-9ba2-13a205560c11","Type":"ContainerStarted","Data":"e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.650855 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5789d498-qs55q" event={"ID":"6921187e-5058-45ef-9ba2-13a205560c11","Type":"ContainerStarted","Data":"abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.650880 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5789d498-qs55q" event={"ID":"6921187e-5058-45ef-9ba2-13a205560c11","Type":"ContainerStarted","Data":"020ef7df48ea9086d0cd7f2d14e31a4dba8f43ad3081dce8d25039320e08a8f7"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.650917 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.651086 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.653852 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67dfb54788-qqrtk" 
event={"ID":"5aad3d14-3e24-460a-b6b3-9508031f76d6","Type":"ContainerStarted","Data":"b4552fe1e56821c88f0859a1b3452941bae3850158d182353a2e346e5fa0d300"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.653886 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67dfb54788-qqrtk" event={"ID":"5aad3d14-3e24-460a-b6b3-9508031f76d6","Type":"ContainerStarted","Data":"c35fcc2b239bbd68eb120e2db987fc9cf38fb6f204899c89e0125ca8749325ae"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.653918 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67dfb54788-qqrtk" event={"ID":"5aad3d14-3e24-460a-b6b3-9508031f76d6","Type":"ContainerStarted","Data":"6a3a11407b59f853477fea34cf45d2eca042c580c233943f65629bba60d4b58d"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.654275 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.654331 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.659407 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-565cfb5466-k8v6z"] Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.685813 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" event={"ID":"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0","Type":"ContainerStarted","Data":"6b4af462134e6f3d9dc7ad22aa59488b9c5e4665379e547ff04d47e47316d865"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.690755 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-z7ltl" event={"ID":"a0e16259-a87f-4bb8-8fa1-5ee63129e195","Type":"ContainerStarted","Data":"e2230960fc959ff8b0ba1e5cdf2736f3d66792ea0a205510deda2bd1dfc13e8a"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.700551 4817 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d5789d498-qs55q" podStartSLOduration=2.7005300070000002 podStartE2EDuration="2.700530007s" podCreationTimestamp="2026-03-14 05:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:38.698399566 +0000 UTC m=+1272.736660312" watchObservedRunningTime="2026-03-14 05:53:38.700530007 +0000 UTC m=+1272.738790753" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.708254 4817 generic.go:334] "Generic (PLEG): container finished" podID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" containerID="520b4a900e8db44a5b1cfb60e5017326cdadfeb902e37e5f41afa9955b4be6f6" exitCode=0 Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.708351 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" event={"ID":"4f526d8c-6e78-4c4a-9528-88006e41d2d7","Type":"ContainerDied","Data":"520b4a900e8db44a5b1cfb60e5017326cdadfeb902e37e5f41afa9955b4be6f6"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.708378 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" event={"ID":"4f526d8c-6e78-4c4a-9528-88006e41d2d7","Type":"ContainerStarted","Data":"d7a347e9112649b2c7cf48820478cefc797131405ca52a47d004ee5b83da341f"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.712821 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="ceilometer-central-agent" containerID="cri-o://abf84e64294f172cd9a384f27c6d4f82ed7e69baad354375bd6d27a8b8643846" gracePeriod=30 Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.712984 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-658f9b4fd7-k22b5" 
event={"ID":"a737974d-6611-4a56-9bbb-27256380ae54","Type":"ContainerStarted","Data":"422d6e68946c2f6090728493a060afbe997679020f5a6afe83d9782e5200dd96"} Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.713026 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.713064 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="proxy-httpd" containerID="cri-o://33bfd2c1402ff63d9347e227b5c718a4cf94008370ece5a64fc69cb8724d174e" gracePeriod=30 Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.713110 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="sg-core" containerID="cri-o://9065f790d542f8e8d4055f03a24a1c0b11fcd75f603fa58481507eadd44ee8dd" gracePeriod=30 Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.713148 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="ceilometer-notification-agent" containerID="cri-o://5a01047c670e96bc7795fcfea790e229df3f8280863775fc2abe539eceb99223" gracePeriod=30 Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.738562 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-67dfb54788-qqrtk" podStartSLOduration=3.73853816 podStartE2EDuration="3.73853816s" podCreationTimestamp="2026-03-14 05:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:38.734061292 +0000 UTC m=+1272.772322038" watchObservedRunningTime="2026-03-14 05:53:38.73853816 +0000 UTC m=+1272.776798906" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.753331 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-internal-tls-certs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.753392 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-config-data-custom\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.753469 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-config-data\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.753565 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-public-tls-certs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.753600 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-combined-ca-bundle\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 
05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.753679 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4wcv\" (UniqueName: \"kubernetes.io/projected/af06e777-9e2e-437e-a013-cd5e83735ac0-kube-api-access-s4wcv\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.753699 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af06e777-9e2e-437e-a013-cd5e83735ac0-logs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.854712 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-internal-tls-certs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.854765 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-config-data-custom\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.854829 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-config-data\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 
crc kubenswrapper[4817]: I0314 05:53:38.854885 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-public-tls-certs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.854919 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-combined-ca-bundle\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.854961 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4wcv\" (UniqueName: \"kubernetes.io/projected/af06e777-9e2e-437e-a013-cd5e83735ac0-kube-api-access-s4wcv\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.854979 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af06e777-9e2e-437e-a013-cd5e83735ac0-logs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.855349 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af06e777-9e2e-437e-a013-cd5e83735ac0-logs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.861776 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-config-data\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.868468 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-internal-tls-certs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.876376 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-config-data-custom\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.890593 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-public-tls-certs\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.891234 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af06e777-9e2e-437e-a013-cd5e83735ac0-combined-ca-bundle\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.894563 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s4wcv\" (UniqueName: \"kubernetes.io/projected/af06e777-9e2e-437e-a013-cd5e83735ac0-kube-api-access-s4wcv\") pod \"barbican-api-565cfb5466-k8v6z\" (UID: \"af06e777-9e2e-437e-a013-cd5e83735ac0\") " pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:38 crc kubenswrapper[4817]: I0314 05:53:38.962566 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:40 crc kubenswrapper[4817]: I0314 05:53:40.120130 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.260943542 podStartE2EDuration="58.120100866s" podCreationTimestamp="2026-03-14 05:52:42 +0000 UTC" firstStartedPulling="2026-03-14 05:52:43.42645765 +0000 UTC m=+1217.464718406" lastFinishedPulling="2026-03-14 05:53:37.285614984 +0000 UTC m=+1271.323875730" observedRunningTime="2026-03-14 05:53:40.093347057 +0000 UTC m=+1274.131607813" watchObservedRunningTime="2026-03-14 05:53:40.120100866 +0000 UTC m=+1274.158361612" Mar 14 05:53:40 crc kubenswrapper[4817]: I0314 05:53:40.157937 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-z7ltl" podStartSLOduration=4.605743126 podStartE2EDuration="58.157904903s" podCreationTimestamp="2026-03-14 05:52:42 +0000 UTC" firstStartedPulling="2026-03-14 05:52:43.732063077 +0000 UTC m=+1217.770323823" lastFinishedPulling="2026-03-14 05:53:37.284224854 +0000 UTC m=+1271.322485600" observedRunningTime="2026-03-14 05:53:40.127370685 +0000 UTC m=+1274.165631431" watchObservedRunningTime="2026-03-14 05:53:40.157904903 +0000 UTC m=+1274.196165649" Mar 14 05:53:40 crc kubenswrapper[4817]: I0314 05:53:40.657089 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-565cfb5466-k8v6z"] Mar 14 05:53:40 crc kubenswrapper[4817]: I0314 05:53:40.740938 4817 generic.go:334] "Generic (PLEG): container finished" 
podID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerID="33bfd2c1402ff63d9347e227b5c718a4cf94008370ece5a64fc69cb8724d174e" exitCode=0 Mar 14 05:53:40 crc kubenswrapper[4817]: I0314 05:53:40.740981 4817 generic.go:334] "Generic (PLEG): container finished" podID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerID="9065f790d542f8e8d4055f03a24a1c0b11fcd75f603fa58481507eadd44ee8dd" exitCode=2 Mar 14 05:53:40 crc kubenswrapper[4817]: I0314 05:53:40.748124 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerDied","Data":"33bfd2c1402ff63d9347e227b5c718a4cf94008370ece5a64fc69cb8724d174e"} Mar 14 05:53:40 crc kubenswrapper[4817]: I0314 05:53:40.748181 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerDied","Data":"9065f790d542f8e8d4055f03a24a1c0b11fcd75f603fa58481507eadd44ee8dd"} Mar 14 05:53:40 crc kubenswrapper[4817]: I0314 05:53:40.748194 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-565cfb5466-k8v6z" event={"ID":"af06e777-9e2e-437e-a013-cd5e83735ac0","Type":"ContainerStarted","Data":"f2b305deceb5a8b95e88f52e8bf570553f7d1505362f21ed5457d7736c800b63"} Mar 14 05:53:41 crc kubenswrapper[4817]: I0314 05:53:41.760848 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" event={"ID":"4f526d8c-6e78-4c4a-9528-88006e41d2d7","Type":"ContainerStarted","Data":"b9df2c54dba06e7ad1e2e1394b10ef5e298f1a6234cc69f78a627bcdecd055d8"} Mar 14 05:53:41 crc kubenswrapper[4817]: I0314 05:53:41.764583 4817 generic.go:334] "Generic (PLEG): container finished" podID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerID="abf84e64294f172cd9a384f27c6d4f82ed7e69baad354375bd6d27a8b8643846" exitCode=0 Mar 14 05:53:41 crc kubenswrapper[4817]: I0314 05:53:41.764641 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerDied","Data":"abf84e64294f172cd9a384f27c6d4f82ed7e69baad354375bd6d27a8b8643846"} Mar 14 05:53:42 crc kubenswrapper[4817]: I0314 05:53:42.788878 4817 generic.go:334] "Generic (PLEG): container finished" podID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerID="5a01047c670e96bc7795fcfea790e229df3f8280863775fc2abe539eceb99223" exitCode=0 Mar 14 05:53:42 crc kubenswrapper[4817]: I0314 05:53:42.788962 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerDied","Data":"5a01047c670e96bc7795fcfea790e229df3f8280863775fc2abe539eceb99223"} Mar 14 05:53:42 crc kubenswrapper[4817]: I0314 05:53:42.796796 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-565cfb5466-k8v6z" event={"ID":"af06e777-9e2e-437e-a013-cd5e83735ac0","Type":"ContainerStarted","Data":"e997906045412318e60fcc5f076402568c76dc7b308d34a698bf3cb23e3f5d9e"} Mar 14 05:53:42 crc kubenswrapper[4817]: I0314 05:53:42.797005 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:42 crc kubenswrapper[4817]: I0314 05:53:42.839306 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" podStartSLOduration=7.839276783 podStartE2EDuration="7.839276783s" podCreationTimestamp="2026-03-14 05:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:42.829646876 +0000 UTC m=+1276.867907622" watchObservedRunningTime="2026-03-14 05:53:42.839276783 +0000 UTC m=+1276.877537529" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.007587 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.052258 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-run-httpd\") pod \"55cdf8b8-c7aa-40df-b968-24656d19c55c\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.052388 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-sg-core-conf-yaml\") pod \"55cdf8b8-c7aa-40df-b968-24656d19c55c\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.052418 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb48g\" (UniqueName: \"kubernetes.io/projected/55cdf8b8-c7aa-40df-b968-24656d19c55c-kube-api-access-mb48g\") pod \"55cdf8b8-c7aa-40df-b968-24656d19c55c\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.052463 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-log-httpd\") pod \"55cdf8b8-c7aa-40df-b968-24656d19c55c\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.052585 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-scripts\") pod \"55cdf8b8-c7aa-40df-b968-24656d19c55c\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.052740 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-combined-ca-bundle\") pod \"55cdf8b8-c7aa-40df-b968-24656d19c55c\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.052827 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-config-data\") pod \"55cdf8b8-c7aa-40df-b968-24656d19c55c\" (UID: \"55cdf8b8-c7aa-40df-b968-24656d19c55c\") " Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.052964 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "55cdf8b8-c7aa-40df-b968-24656d19c55c" (UID: "55cdf8b8-c7aa-40df-b968-24656d19c55c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.053373 4817 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.053375 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "55cdf8b8-c7aa-40df-b968-24656d19c55c" (UID: "55cdf8b8-c7aa-40df-b968-24656d19c55c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.058224 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55cdf8b8-c7aa-40df-b968-24656d19c55c-kube-api-access-mb48g" (OuterVolumeSpecName: "kube-api-access-mb48g") pod "55cdf8b8-c7aa-40df-b968-24656d19c55c" (UID: "55cdf8b8-c7aa-40df-b968-24656d19c55c"). 
InnerVolumeSpecName "kube-api-access-mb48g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.060699 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-scripts" (OuterVolumeSpecName: "scripts") pod "55cdf8b8-c7aa-40df-b968-24656d19c55c" (UID: "55cdf8b8-c7aa-40df-b968-24656d19c55c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.086487 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "55cdf8b8-c7aa-40df-b968-24656d19c55c" (UID: "55cdf8b8-c7aa-40df-b968-24656d19c55c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.155858 4817 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.155921 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb48g\" (UniqueName: \"kubernetes.io/projected/55cdf8b8-c7aa-40df-b968-24656d19c55c-kube-api-access-mb48g\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.155939 4817 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/55cdf8b8-c7aa-40df-b968-24656d19c55c-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.155951 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-scripts\") on node 
\"crc\" DevicePath \"\"" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.177100 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55cdf8b8-c7aa-40df-b968-24656d19c55c" (UID: "55cdf8b8-c7aa-40df-b968-24656d19c55c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.220076 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-config-data" (OuterVolumeSpecName: "config-data") pod "55cdf8b8-c7aa-40df-b968-24656d19c55c" (UID: "55cdf8b8-c7aa-40df-b968-24656d19c55c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.257930 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.257968 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55cdf8b8-c7aa-40df-b968-24656d19c55c-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.821560 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-565cfb5466-k8v6z" event={"ID":"af06e777-9e2e-437e-a013-cd5e83735ac0","Type":"ContainerStarted","Data":"ed6a0be4a106bfb5cbb984f0090be09bdb0255c7c909027ad3253c0df777d027"} Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.822336 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.822433 4817 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.836510 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" event={"ID":"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0","Type":"ContainerStarted","Data":"0b6272757379f7491d332298d66862e04ca4b6c6aae2b0b95c6c05576a98cb27"} Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.836651 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" event={"ID":"cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0","Type":"ContainerStarted","Data":"ecc226a6e12f0c8783ec37e7f4ad281b3b0a11f850e513e892cd4d2832f72679"} Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.839639 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-658f9b4fd7-k22b5" event={"ID":"a737974d-6611-4a56-9bbb-27256380ae54","Type":"ContainerStarted","Data":"702b4fb07a396ed9cc4ea8c030dc1d5a25946eae9eaacb94877cce5444e79c56"} Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.839734 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-658f9b4fd7-k22b5" event={"ID":"a737974d-6611-4a56-9bbb-27256380ae54","Type":"ContainerStarted","Data":"0632a26d2ffca7824013a5a00c4325229fbb3d4bbfd976e8510d05b8d67ef85a"} Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.850497 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.854473 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"55cdf8b8-c7aa-40df-b968-24656d19c55c","Type":"ContainerDied","Data":"3d54dd6ef7a5844273becdfbe642c09456e0c464d950a5f7235ffa1de13d806b"} Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.854770 4817 scope.go:117] "RemoveContainer" containerID="33bfd2c1402ff63d9347e227b5c718a4cf94008370ece5a64fc69cb8724d174e" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.863810 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-565cfb5466-k8v6z" podStartSLOduration=5.863772171 podStartE2EDuration="5.863772171s" podCreationTimestamp="2026-03-14 05:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:43.858310714 +0000 UTC m=+1277.896571470" watchObservedRunningTime="2026-03-14 05:53:43.863772171 +0000 UTC m=+1277.902032917" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.889068 4817 scope.go:117] "RemoveContainer" containerID="9065f790d542f8e8d4055f03a24a1c0b11fcd75f603fa58481507eadd44ee8dd" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.901415 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-658f9b4fd7-k22b5" podStartSLOduration=4.235412288 podStartE2EDuration="8.901379383s" podCreationTimestamp="2026-03-14 05:53:35 +0000 UTC" firstStartedPulling="2026-03-14 05:53:37.905200799 +0000 UTC m=+1271.943461545" lastFinishedPulling="2026-03-14 05:53:42.571167894 +0000 UTC m=+1276.609428640" observedRunningTime="2026-03-14 05:53:43.890334445 +0000 UTC m=+1277.928595191" watchObservedRunningTime="2026-03-14 05:53:43.901379383 +0000 UTC m=+1277.939640129" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.913675 4817 scope.go:117] 
"RemoveContainer" containerID="5a01047c670e96bc7795fcfea790e229df3f8280863775fc2abe539eceb99223" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.931118 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b7556b9f8-gtmkg" podStartSLOduration=4.361491473 podStartE2EDuration="8.931082667s" podCreationTimestamp="2026-03-14 05:53:35 +0000 UTC" firstStartedPulling="2026-03-14 05:53:38.005786601 +0000 UTC m=+1272.044047347" lastFinishedPulling="2026-03-14 05:53:42.575377795 +0000 UTC m=+1276.613638541" observedRunningTime="2026-03-14 05:53:43.92040985 +0000 UTC m=+1277.958670586" watchObservedRunningTime="2026-03-14 05:53:43.931082667 +0000 UTC m=+1277.969343413" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.969510 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.984046 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.993317 4817 scope.go:117] "RemoveContainer" containerID="abf84e64294f172cd9a384f27c6d4f82ed7e69baad354375bd6d27a8b8643846" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.997054 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:53:43 crc kubenswrapper[4817]: E0314 05:53:43.997618 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="sg-core" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.997659 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="sg-core" Mar 14 05:53:43 crc kubenswrapper[4817]: E0314 05:53:43.997684 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="ceilometer-central-agent" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 
05:53:43.997693 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="ceilometer-central-agent" Mar 14 05:53:43 crc kubenswrapper[4817]: E0314 05:53:43.997706 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="ceilometer-notification-agent" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.997713 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="ceilometer-notification-agent" Mar 14 05:53:43 crc kubenswrapper[4817]: E0314 05:53:43.997760 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="proxy-httpd" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.997770 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="proxy-httpd" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.998024 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="ceilometer-notification-agent" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.998047 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="sg-core" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.998066 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="ceilometer-central-agent" Mar 14 05:53:43 crc kubenswrapper[4817]: I0314 05:53:43.998090 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" containerName="proxy-httpd" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.000315 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.008006 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.008259 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.010520 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.181521 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-scripts\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.181607 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.181647 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gsv\" (UniqueName: \"kubernetes.io/projected/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-kube-api-access-h4gsv\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.182703 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-config-data\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " 
pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.182758 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.182789 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-log-httpd\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.182846 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-run-httpd\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.285073 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.285144 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gsv\" (UniqueName: \"kubernetes.io/projected/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-kube-api-access-h4gsv\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.285242 4817 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-config-data\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.285271 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.285292 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-log-httpd\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.285327 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-run-httpd\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.285358 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-scripts\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.286683 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-log-httpd\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: 
I0314 05:53:44.286785 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-run-httpd\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.293436 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-scripts\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.294750 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-config-data\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.296208 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.302835 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.315226 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gsv\" (UniqueName: \"kubernetes.io/projected/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-kube-api-access-h4gsv\") pod \"ceilometer-0\" (UID: 
\"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") " pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.320769 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.748003 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55cdf8b8-c7aa-40df-b968-24656d19c55c" path="/var/lib/kubelet/pods/55cdf8b8-c7aa-40df-b968-24656d19c55c/volumes" Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.836306 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:53:44 crc kubenswrapper[4817]: W0314 05:53:44.841242 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3d2f3d0_d087_4aa2_86bb_aa46f576f23e.slice/crio-4b50af39f620b7107ab32687e3aef828417ddea6ddfd614ce859e566a4989ef2 WatchSource:0}: Error finding container 4b50af39f620b7107ab32687e3aef828417ddea6ddfd614ce859e566a4989ef2: Status 404 returned error can't find the container with id 4b50af39f620b7107ab32687e3aef828417ddea6ddfd614ce859e566a4989ef2 Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.859340 4817 generic.go:334] "Generic (PLEG): container finished" podID="c2d5019d-817c-4cc1-b73d-7e32a6cb97b3" containerID="bbe42d57d935818f316d61b98ac093be210893b0c901fafc039d564e1e065f6d" exitCode=0 Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.859427 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fbf7q" event={"ID":"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3","Type":"ContainerDied","Data":"bbe42d57d935818f316d61b98ac093be210893b0c901fafc039d564e1e065f6d"} Mar 14 05:53:44 crc kubenswrapper[4817]: I0314 05:53:44.861281 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerStarted","Data":"4b50af39f620b7107ab32687e3aef828417ddea6ddfd614ce859e566a4989ef2"} Mar 14 05:53:45 crc kubenswrapper[4817]: I0314 05:53:45.876616 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerStarted","Data":"f5d8a669e8609ac990c4e24ffe65c714c013ce07fb450a5d9e827508fc1da827"} Mar 14 05:53:45 crc kubenswrapper[4817]: I0314 05:53:45.891565 4817 generic.go:334] "Generic (PLEG): container finished" podID="a0e16259-a87f-4bb8-8fa1-5ee63129e195" containerID="e2230960fc959ff8b0ba1e5cdf2736f3d66792ea0a205510deda2bd1dfc13e8a" exitCode=0 Mar 14 05:53:45 crc kubenswrapper[4817]: I0314 05:53:45.891687 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-z7ltl" event={"ID":"a0e16259-a87f-4bb8-8fa1-5ee63129e195","Type":"ContainerDied","Data":"e2230960fc959ff8b0ba1e5cdf2736f3d66792ea0a205510deda2bd1dfc13e8a"} Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.326489 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.348908 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-combined-ca-bundle\") pod \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.349032 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-config\") pod \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.349063 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55l57\" (UniqueName: \"kubernetes.io/projected/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-kube-api-access-55l57\") pod \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\" (UID: \"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3\") " Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.356359 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-kube-api-access-55l57" (OuterVolumeSpecName: "kube-api-access-55l57") pod "c2d5019d-817c-4cc1-b73d-7e32a6cb97b3" (UID: "c2d5019d-817c-4cc1-b73d-7e32a6cb97b3"). InnerVolumeSpecName "kube-api-access-55l57". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.409353 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2d5019d-817c-4cc1-b73d-7e32a6cb97b3" (UID: "c2d5019d-817c-4cc1-b73d-7e32a6cb97b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.425266 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-config" (OuterVolumeSpecName: "config") pod "c2d5019d-817c-4cc1-b73d-7e32a6cb97b3" (UID: "c2d5019d-817c-4cc1-b73d-7e32a6cb97b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.451776 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.451822 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.451837 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55l57\" (UniqueName: \"kubernetes.io/projected/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3-kube-api-access-55l57\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.500176 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.569494 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8"] Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.569942 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" podUID="4db56aa3-7556-4a5f-89db-662a8acf5948" containerName="dnsmasq-dns" containerID="cri-o://580461da1329fd7332858cbec4d6656f5677926b7acfea0a86d8e109bc17c2f8" gracePeriod=10 Mar 14 05:53:46 crc 
kubenswrapper[4817]: I0314 05:53:46.926410 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fbf7q" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.927009 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fbf7q" event={"ID":"c2d5019d-817c-4cc1-b73d-7e32a6cb97b3","Type":"ContainerDied","Data":"cc08a400f5fe4531b9fddb129b266a071eb5de9a27070635bf9ff33133de8f75"} Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.927443 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc08a400f5fe4531b9fddb129b266a071eb5de9a27070635bf9ff33133de8f75" Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.945156 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerStarted","Data":"bb8fb08a199b85e5cf72c70d43f78c977e906fdb2a5ccef9c6d8c7ee45eb6d5c"} Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.966189 4817 generic.go:334] "Generic (PLEG): container finished" podID="4db56aa3-7556-4a5f-89db-662a8acf5948" containerID="580461da1329fd7332858cbec4d6656f5677926b7acfea0a86d8e109bc17c2f8" exitCode=0 Mar 14 05:53:46 crc kubenswrapper[4817]: I0314 05:53:46.966448 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" event={"ID":"4db56aa3-7556-4a5f-89db-662a8acf5948","Type":"ContainerDied","Data":"580461da1329fd7332858cbec4d6656f5677926b7acfea0a86d8e109bc17c2f8"} Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.357611 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-mmszt"] Mar 14 05:53:47 crc kubenswrapper[4817]: E0314 05:53:47.358300 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d5019d-817c-4cc1-b73d-7e32a6cb97b3" containerName="neutron-db-sync" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.358331 4817 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d5019d-817c-4cc1-b73d-7e32a6cb97b3" containerName="neutron-db-sync" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.358558 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d5019d-817c-4cc1-b73d-7e32a6cb97b3" containerName="neutron-db-sync" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.360051 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.374843 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-mmszt"] Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.384462 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.384546 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-dns-svc\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.384573 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpmfl\" (UniqueName: \"kubernetes.io/projected/202dd51e-5f3b-4057-aa14-2b24c656b08c-kube-api-access-jpmfl\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.384672 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-config\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.388327 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.493256 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.493369 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-dns-svc\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.493394 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpmfl\" (UniqueName: \"kubernetes.io/projected/202dd51e-5f3b-4057-aa14-2b24c656b08c-kube-api-access-jpmfl\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.494845 4817 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-dns-svc\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.495029 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-config\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.495131 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.495216 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.495768 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.496719 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-config\") 
pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.538307 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpmfl\" (UniqueName: \"kubernetes.io/projected/202dd51e-5f3b-4057-aa14-2b24c656b08c-kube-api-access-jpmfl\") pod \"dnsmasq-dns-869f779d85-mmszt\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.606679 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.618046 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cffdb9bc8-nfp5r"] Mar 14 05:53:47 crc kubenswrapper[4817]: E0314 05:53:47.618589 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db56aa3-7556-4a5f-89db-662a8acf5948" containerName="init" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.618614 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db56aa3-7556-4a5f-89db-662a8acf5948" containerName="init" Mar 14 05:53:47 crc kubenswrapper[4817]: E0314 05:53:47.618630 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db56aa3-7556-4a5f-89db-662a8acf5948" containerName="dnsmasq-dns" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.618638 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db56aa3-7556-4a5f-89db-662a8acf5948" containerName="dnsmasq-dns" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.618827 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db56aa3-7556-4a5f-89db-662a8acf5948" containerName="dnsmasq-dns" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.621244 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.627585 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9rpn5" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.627776 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.627814 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.627847 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.636222 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cffdb9bc8-nfp5r"] Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.685146 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.729946 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-z7ltl" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.800861 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-sb\") pod \"4db56aa3-7556-4a5f-89db-662a8acf5948\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.801006 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-dns-svc\") pod \"4db56aa3-7556-4a5f-89db-662a8acf5948\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.801039 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmsqd\" (UniqueName: \"kubernetes.io/projected/4db56aa3-7556-4a5f-89db-662a8acf5948-kube-api-access-dmsqd\") pod \"4db56aa3-7556-4a5f-89db-662a8acf5948\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.801163 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-nb\") pod \"4db56aa3-7556-4a5f-89db-662a8acf5948\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.801301 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-config\") pod \"4db56aa3-7556-4a5f-89db-662a8acf5948\" (UID: \"4db56aa3-7556-4a5f-89db-662a8acf5948\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.801617 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-config\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.801666 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-ovndb-tls-certs\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.801713 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-httpd-config\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.802078 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c4ww\" (UniqueName: \"kubernetes.io/projected/8d851269-5dc1-486c-9914-f8de2088dfc5-kube-api-access-4c4ww\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.802107 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-combined-ca-bundle\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.819860 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4db56aa3-7556-4a5f-89db-662a8acf5948-kube-api-access-dmsqd" (OuterVolumeSpecName: "kube-api-access-dmsqd") pod "4db56aa3-7556-4a5f-89db-662a8acf5948" (UID: "4db56aa3-7556-4a5f-89db-662a8acf5948"). InnerVolumeSpecName "kube-api-access-dmsqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.863325 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4db56aa3-7556-4a5f-89db-662a8acf5948" (UID: "4db56aa3-7556-4a5f-89db-662a8acf5948"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.903290 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-scripts\") pod \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.903958 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-config-data\") pod \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.904075 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0e16259-a87f-4bb8-8fa1-5ee63129e195-etc-machine-id\") pod \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.904199 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-combined-ca-bundle\") pod \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.904319 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-db-sync-config-data\") pod \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.904630 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69s7m\" (UniqueName: \"kubernetes.io/projected/a0e16259-a87f-4bb8-8fa1-5ee63129e195-kube-api-access-69s7m\") pod \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\" (UID: \"a0e16259-a87f-4bb8-8fa1-5ee63129e195\") " Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.905093 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-config\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.909456 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-ovndb-tls-certs\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.913264 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-httpd-config\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " 
pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.915804 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c4ww\" (UniqueName: \"kubernetes.io/projected/8d851269-5dc1-486c-9914-f8de2088dfc5-kube-api-access-4c4ww\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.917122 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-combined-ca-bundle\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.918563 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmsqd\" (UniqueName: \"kubernetes.io/projected/4db56aa3-7556-4a5f-89db-662a8acf5948-kube-api-access-dmsqd\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.924273 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.910042 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0e16259-a87f-4bb8-8fa1-5ee63129e195-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a0e16259-a87f-4bb8-8fa1-5ee63129e195" (UID: "a0e16259-a87f-4bb8-8fa1-5ee63129e195"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.912659 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-scripts" (OuterVolumeSpecName: "scripts") pod "a0e16259-a87f-4bb8-8fa1-5ee63129e195" (UID: "a0e16259-a87f-4bb8-8fa1-5ee63129e195"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.923925 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a0e16259-a87f-4bb8-8fa1-5ee63129e195" (UID: "a0e16259-a87f-4bb8-8fa1-5ee63129e195"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.917511 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-ovndb-tls-certs\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.945692 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e16259-a87f-4bb8-8fa1-5ee63129e195-kube-api-access-69s7m" (OuterVolumeSpecName: "kube-api-access-69s7m") pod "a0e16259-a87f-4bb8-8fa1-5ee63129e195" (UID: "a0e16259-a87f-4bb8-8fa1-5ee63129e195"). InnerVolumeSpecName "kube-api-access-69s7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.952041 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-config\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.952691 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-httpd-config\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.953052 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c4ww\" (UniqueName: \"kubernetes.io/projected/8d851269-5dc1-486c-9914-f8de2088dfc5-kube-api-access-4c4ww\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.959183 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-combined-ca-bundle\") pod \"neutron-cffdb9bc8-nfp5r\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:47 crc kubenswrapper[4817]: I0314 05:53:47.977713 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4db56aa3-7556-4a5f-89db-662a8acf5948" (UID: "4db56aa3-7556-4a5f-89db-662a8acf5948"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.013731 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cffdb9bc8-nfp5r"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.029729 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.029786 4817 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0e16259-a87f-4bb8-8fa1-5ee63129e195-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.029803 4817 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.029819 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.029834 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69s7m\" (UniqueName: \"kubernetes.io/projected/a0e16259-a87f-4bb8-8fa1-5ee63129e195-kube-api-access-69s7m\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.030819 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-z7ltl"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.044377 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-z7ltl" event={"ID":"a0e16259-a87f-4bb8-8fa1-5ee63129e195","Type":"ContainerDied","Data":"320e61058bcd1471869cd8780efe8c1ed5efad09b5cfab7bc73b4081b1d0eaa5"}
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.044519 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320e61058bcd1471869cd8780efe8c1ed5efad09b5cfab7bc73b4081b1d0eaa5"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.049211 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8" event={"ID":"4db56aa3-7556-4a5f-89db-662a8acf5948","Type":"ContainerDied","Data":"4ccc3b7d1ecd7d55b8ae5c31a7bb63711204653c56eecb9f7a0d5d11b1001ca5"}
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.049302 4817 scope.go:117] "RemoveContainer" containerID="580461da1329fd7332858cbec4d6656f5677926b7acfea0a86d8e109bc17c2f8"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.049537 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.096974 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0e16259-a87f-4bb8-8fa1-5ee63129e195" (UID: "a0e16259-a87f-4bb8-8fa1-5ee63129e195"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.118744 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-config" (OuterVolumeSpecName: "config") pod "4db56aa3-7556-4a5f-89db-662a8acf5948" (UID: "4db56aa3-7556-4a5f-89db-662a8acf5948"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.134071 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.141053 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.141412 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4db56aa3-7556-4a5f-89db-662a8acf5948" (UID: "4db56aa3-7556-4a5f-89db-662a8acf5948"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.150758 4817 scope.go:117] "RemoveContainer" containerID="d4b99dfd9cb1ec78170452b9c2d88cba768dff71984563d847cf45d2fa8f06ad"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.185559 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-config-data" (OuterVolumeSpecName: "config-data") pod "a0e16259-a87f-4bb8-8fa1-5ee63129e195" (UID: "a0e16259-a87f-4bb8-8fa1-5ee63129e195"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.246562 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4db56aa3-7556-4a5f-89db-662a8acf5948-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.246988 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e16259-a87f-4bb8-8fa1-5ee63129e195-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.285421 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 14 05:53:48 crc kubenswrapper[4817]: E0314 05:53:48.286106 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e16259-a87f-4bb8-8fa1-5ee63129e195" containerName="cinder-db-sync"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.286122 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e16259-a87f-4bb8-8fa1-5ee63129e195" containerName="cinder-db-sync"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.286398 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e16259-a87f-4bb8-8fa1-5ee63129e195" containerName="cinder-db-sync"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.287540 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.298942 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.307337 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.420178 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-mmszt"]
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.497433 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-mmszt"]
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.510051 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.510106 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcdp\" (UniqueName: \"kubernetes.io/projected/d9de96af-67da-4521-9f6f-f6dfadb3b270-kube-api-access-lvcdp\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.510159 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.510177 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.510192 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9de96af-67da-4521-9f6f-f6dfadb3b270-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.510265 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.545727 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-mqbx6"]
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.560179 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-mqbx6"]
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.560306 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.612489 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.612578 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.612609 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvcdp\" (UniqueName: \"kubernetes.io/projected/d9de96af-67da-4521-9f6f-f6dfadb3b270-kube-api-access-lvcdp\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.612659 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.612683 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.612703 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9de96af-67da-4521-9f6f-f6dfadb3b270-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.612812 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9de96af-67da-4521-9f6f-f6dfadb3b270-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.621822 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-scripts\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.629920 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.634584 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.635405 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.656843 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.663157 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvcdp\" (UniqueName: \"kubernetes.io/projected/d9de96af-67da-4521-9f6f-f6dfadb3b270-kube-api-access-lvcdp\") pod \"cinder-scheduler-0\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " pod="openstack/cinder-scheduler-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.687641 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.687792 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.697356 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.715153 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-config\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.715235 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.715262 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.715319 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfqfg\" (UniqueName: \"kubernetes.io/projected/75d14374-986c-4b2c-8d2c-aa97aaee29fe-kube-api-access-bfqfg\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:48 crc kubenswrapper[4817]: I0314 05:53:48.715378 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-dns-svc\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817097 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817513 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5hx\" (UniqueName: \"kubernetes.io/projected/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-kube-api-access-jz5hx\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817641 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-config\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817678 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-logs\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817705 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-scripts\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817727 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817747 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817771 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817842 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfqfg\" (UniqueName: \"kubernetes.io/projected/75d14374-986c-4b2c-8d2c-aa97aaee29fe-kube-api-access-bfqfg\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817920 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-dns-svc\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817947 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.817981 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.818749 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-config\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.819403 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.824469 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.825051 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-dns-svc\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.847652 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfqfg\" (UniqueName: \"kubernetes.io/projected/75d14374-986c-4b2c-8d2c-aa97aaee29fe-kube-api-access-bfqfg\") pod \"dnsmasq-dns-58db5546cc-mqbx6\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") " pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.919269 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.919333 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.919379 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.919409 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5hx\" (UniqueName: \"kubernetes.io/projected/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-kube-api-access-jz5hx\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.919471 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-logs\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.919518 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-scripts\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.919549 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.920048 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.921189 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-logs\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.924783 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.925684 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-scripts\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.929598 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.930720 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:48.948612 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5hx\" (UniqueName: \"kubernetes.io/projected/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-kube-api-access-jz5hx\") pod \"cinder-api-0\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") " pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.027160 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.095675 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-mmszt" event={"ID":"202dd51e-5f3b-4057-aa14-2b24c656b08c","Type":"ContainerStarted","Data":"3ef17cfc35cec3d53259bf5164d7511659c1610d62327c1d2a3bdfbb136fcbe8"}
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.122474 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.128974 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerStarted","Data":"1aea90580413b4215ff12e0314af7a1894f46dffc6ad184b3159f677a11526e2"}
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.129184 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.136581 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8"]
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.156654 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-xgmf8"]
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.659358 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cffdb9bc8-nfp5r"]
Mar 14 05:53:49 crc kubenswrapper[4817]: I0314 05:53:49.836724 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.070233 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.097562 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-mqbx6"]
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.156366 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d5789d498-qs55q"
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.247723 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9de96af-67da-4521-9f6f-f6dfadb3b270","Type":"ContainerStarted","Data":"7f1d1d7bd2587cd9d15ddabe57fa770f46fab72d4096651ce88a869f0b07fd5f"}
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.267832 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ccee9da-6ff0-4b19-a97b-8fc7071782d6","Type":"ContainerStarted","Data":"0bf4fd3cd1d0bcdb4eb1b112bd9c732a3e38c3017bad964b9cd52aa748a01e1f"}
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.312201 4817 generic.go:334] "Generic (PLEG): container finished" podID="202dd51e-5f3b-4057-aa14-2b24c656b08c" containerID="f86e9afd2805f5cb67ece4b7ad72ce6f536d490f85df2babd09f38acb58a5261" exitCode=0
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.312306 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-mmszt" event={"ID":"202dd51e-5f3b-4057-aa14-2b24c656b08c","Type":"ContainerDied","Data":"f86e9afd2805f5cb67ece4b7ad72ce6f536d490f85df2babd09f38acb58a5261"}
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.323008 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" event={"ID":"75d14374-986c-4b2c-8d2c-aa97aaee29fe","Type":"ContainerStarted","Data":"97fdd5497f773f7e985bc108fa1a02413cf6f709e4477930bce8af8e71b7c702"}
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.343867 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cffdb9bc8-nfp5r" event={"ID":"8d851269-5dc1-486c-9914-f8de2088dfc5","Type":"ContainerStarted","Data":"6813fa994a03d06f9c90e9d2976f859fb8358f6e42e61538c91037d211e9b2f3"}
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.343958 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cffdb9bc8-nfp5r" event={"ID":"8d851269-5dc1-486c-9914-f8de2088dfc5","Type":"ContainerStarted","Data":"e24fa9206c713ad0666df6549d1772b72004b0acf4f9e9cf12dfadcea37894d4"}
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.557070 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.786236 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4db56aa3-7556-4a5f-89db-662a8acf5948" path="/var/lib/kubelet/pods/4db56aa3-7556-4a5f-89db-662a8acf5948/volumes"
Mar 14 05:53:50 crc kubenswrapper[4817]: I0314 05:53:50.937429 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-mmszt"
Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.003936 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d5789d498-qs55q"
Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.126884 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-config\") pod \"202dd51e-5f3b-4057-aa14-2b24c656b08c\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") "
Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.126961 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpmfl\" (UniqueName: \"kubernetes.io/projected/202dd51e-5f3b-4057-aa14-2b24c656b08c-kube-api-access-jpmfl\") pod \"202dd51e-5f3b-4057-aa14-2b24c656b08c\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") "
Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.126982 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-sb\") pod \"202dd51e-5f3b-4057-aa14-2b24c656b08c\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") "
Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.127033 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-dns-svc\") pod \"202dd51e-5f3b-4057-aa14-2b24c656b08c\" (UID: \"202dd51e-5f3b-4057-aa14-2b24c656b08c\") "
Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.127062 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-nb\") pod \"202dd51e-5f3b-4057-aa14-2b24c656b08c\" (UID:
\"202dd51e-5f3b-4057-aa14-2b24c656b08c\") " Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.158339 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202dd51e-5f3b-4057-aa14-2b24c656b08c-kube-api-access-jpmfl" (OuterVolumeSpecName: "kube-api-access-jpmfl") pod "202dd51e-5f3b-4057-aa14-2b24c656b08c" (UID: "202dd51e-5f3b-4057-aa14-2b24c656b08c"). InnerVolumeSpecName "kube-api-access-jpmfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.171711 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-config" (OuterVolumeSpecName: "config") pod "202dd51e-5f3b-4057-aa14-2b24c656b08c" (UID: "202dd51e-5f3b-4057-aa14-2b24c656b08c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.186585 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "202dd51e-5f3b-4057-aa14-2b24c656b08c" (UID: "202dd51e-5f3b-4057-aa14-2b24c656b08c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.196645 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "202dd51e-5f3b-4057-aa14-2b24c656b08c" (UID: "202dd51e-5f3b-4057-aa14-2b24c656b08c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.204919 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "202dd51e-5f3b-4057-aa14-2b24c656b08c" (UID: "202dd51e-5f3b-4057-aa14-2b24c656b08c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.228974 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.229014 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpmfl\" (UniqueName: \"kubernetes.io/projected/202dd51e-5f3b-4057-aa14-2b24c656b08c-kube-api-access-jpmfl\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.229025 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.229037 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.229047 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/202dd51e-5f3b-4057-aa14-2b24c656b08c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.374731 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-mmszt" 
event={"ID":"202dd51e-5f3b-4057-aa14-2b24c656b08c","Type":"ContainerDied","Data":"3ef17cfc35cec3d53259bf5164d7511659c1610d62327c1d2a3bdfbb136fcbe8"} Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.374831 4817 scope.go:117] "RemoveContainer" containerID="f86e9afd2805f5cb67ece4b7ad72ce6f536d490f85df2babd09f38acb58a5261" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.375015 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-mmszt" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.419718 4817 generic.go:334] "Generic (PLEG): container finished" podID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerID="b39a6be9d8d4cd39df181e100d1f30125b368da7ffae2b7ff89e20b9ffc96d87" exitCode=0 Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.419815 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" event={"ID":"75d14374-986c-4b2c-8d2c-aa97aaee29fe","Type":"ContainerDied","Data":"b39a6be9d8d4cd39df181e100d1f30125b368da7ffae2b7ff89e20b9ffc96d87"} Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.444423 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cffdb9bc8-nfp5r" event={"ID":"8d851269-5dc1-486c-9914-f8de2088dfc5","Type":"ContainerStarted","Data":"2ccd3ec57d75d11d5b9b809695c8cad452f5db007222cf9faad62d5f4b827fdc"} Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.444750 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.595985 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-mmszt"] Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.637145 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-mmszt"] Mar 14 05:53:51 crc kubenswrapper[4817]: I0314 05:53:51.691404 4817 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/neutron-cffdb9bc8-nfp5r" podStartSLOduration=4.691367846 podStartE2EDuration="4.691367846s" podCreationTimestamp="2026-03-14 05:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:51.554375266 +0000 UTC m=+1285.592636022" watchObservedRunningTime="2026-03-14 05:53:51.691367846 +0000 UTC m=+1285.729628592" Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.461572 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9de96af-67da-4521-9f6f-f6dfadb3b270","Type":"ContainerStarted","Data":"2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8"} Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.467297 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ccee9da-6ff0-4b19-a97b-8fc7071782d6","Type":"ContainerStarted","Data":"472e0330ca44fea6162e9c93eaf714e74ace4febd3dd1f4a4c4a6848b166d1d8"} Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.481454 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" event={"ID":"75d14374-986c-4b2c-8d2c-aa97aaee29fe","Type":"ContainerStarted","Data":"fc697f9a3ed95fe6fa6525a50d84486d7042a15f553948493395945f58b02057"} Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.481791 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.497345 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerStarted","Data":"f9baa29b6ec9a9ef2b8912f51bde426314d5e03ab2aef9eb2db463f1f38255a3"} Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.497508 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.521352 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" podStartSLOduration=4.521318939 podStartE2EDuration="4.521318939s" podCreationTimestamp="2026-03-14 05:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:52.502320733 +0000 UTC m=+1286.540581479" watchObservedRunningTime="2026-03-14 05:53:52.521318939 +0000 UTC m=+1286.559579685" Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.544883 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.430371201 podStartE2EDuration="9.544853666s" podCreationTimestamp="2026-03-14 05:53:43 +0000 UTC" firstStartedPulling="2026-03-14 05:53:44.84374112 +0000 UTC m=+1278.882001866" lastFinishedPulling="2026-03-14 05:53:50.958223585 +0000 UTC m=+1284.996484331" observedRunningTime="2026-03-14 05:53:52.539158752 +0000 UTC m=+1286.577419498" watchObservedRunningTime="2026-03-14 05:53:52.544853666 +0000 UTC m=+1286.583114412" Mar 14 05:53:52 crc kubenswrapper[4817]: I0314 05:53:52.836550 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202dd51e-5f3b-4057-aa14-2b24c656b08c" path="/var/lib/kubelet/pods/202dd51e-5f3b-4057-aa14-2b24c656b08c/volumes" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.117528 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.130710 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-565cfb5466-k8v6z" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.245811 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d5789d498-qs55q"] Mar 14 
05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.246151 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api-log" containerID="cri-o://abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11" gracePeriod=30 Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.246818 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" containerID="cri-o://e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9" gracePeriod=30 Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.261764 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": EOF" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.261956 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": EOF" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.262075 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": EOF" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.262139 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api-log" probeResult="failure" output="Get 
\"http://10.217.0.148:9311/healthcheck\": EOF" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.510646 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9de96af-67da-4521-9f6f-f6dfadb3b270","Type":"ContainerStarted","Data":"315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799"} Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.513636 4817 generic.go:334] "Generic (PLEG): container finished" podID="6921187e-5058-45ef-9ba2-13a205560c11" containerID="abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11" exitCode=143 Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.513728 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5789d498-qs55q" event={"ID":"6921187e-5058-45ef-9ba2-13a205560c11","Type":"ContainerDied","Data":"abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11"} Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.530298 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api-log" containerID="cri-o://472e0330ca44fea6162e9c93eaf714e74ace4febd3dd1f4a4c4a6848b166d1d8" gracePeriod=30 Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.530437 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api" containerID="cri-o://1729035a49e67beae40684181ace5f2f5a3c64facc3b297b3420aef969393d55" gracePeriod=30 Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.530496 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ccee9da-6ff0-4b19-a97b-8fc7071782d6","Type":"ContainerStarted","Data":"1729035a49e67beae40684181ace5f2f5a3c64facc3b297b3420aef969393d55"} Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.531039 4817 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.548361 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.424965798 podStartE2EDuration="5.54832978s" podCreationTimestamp="2026-03-14 05:53:48 +0000 UTC" firstStartedPulling="2026-03-14 05:53:49.836457719 +0000 UTC m=+1283.874718465" lastFinishedPulling="2026-03-14 05:53:50.959821701 +0000 UTC m=+1284.998082447" observedRunningTime="2026-03-14 05:53:53.537709095 +0000 UTC m=+1287.575969851" watchObservedRunningTime="2026-03-14 05:53:53.54832978 +0000 UTC m=+1287.586590526" Mar 14 05:53:53 crc kubenswrapper[4817]: I0314 05:53:53.582741 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.582705858 podStartE2EDuration="5.582705858s" podCreationTimestamp="2026-03-14 05:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:53.57443007 +0000 UTC m=+1287.612690816" watchObservedRunningTime="2026-03-14 05:53:53.582705858 +0000 UTC m=+1287.620966604" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.027884 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.476972 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-549548697-x46rl"] Mar 14 05:53:54 crc kubenswrapper[4817]: E0314 05:53:54.477327 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202dd51e-5f3b-4057-aa14-2b24c656b08c" containerName="init" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.477344 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="202dd51e-5f3b-4057-aa14-2b24c656b08c" containerName="init" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 
05:53:54.477523 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="202dd51e-5f3b-4057-aa14-2b24c656b08c" containerName="init" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.478419 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.480545 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.481860 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.496169 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-549548697-x46rl"] Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.530544 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-httpd-config\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.530781 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-public-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.530830 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6ssl\" (UniqueName: \"kubernetes.io/projected/ea9ff5c7-bf16-488f-8289-cbc134c9416e-kube-api-access-v6ssl\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " 
pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.531042 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-config\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.531124 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-internal-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.531330 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-combined-ca-bundle\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.531434 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-ovndb-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.542606 4817 generic.go:334] "Generic (PLEG): container finished" podID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerID="472e0330ca44fea6162e9c93eaf714e74ace4febd3dd1f4a4c4a6848b166d1d8" exitCode=143 Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.544076 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"5ccee9da-6ff0-4b19-a97b-8fc7071782d6","Type":"ContainerDied","Data":"472e0330ca44fea6162e9c93eaf714e74ace4febd3dd1f4a4c4a6848b166d1d8"} Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.633455 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-combined-ca-bundle\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.633566 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-ovndb-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.633674 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-httpd-config\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.633881 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-public-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.633997 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6ssl\" (UniqueName: \"kubernetes.io/projected/ea9ff5c7-bf16-488f-8289-cbc134c9416e-kube-api-access-v6ssl\") pod \"neutron-549548697-x46rl\" 
(UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.634047 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-config\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.634082 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-internal-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.652587 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-config\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.654737 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-internal-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.655780 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-httpd-config\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.664103 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-combined-ca-bundle\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.667419 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-public-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.667950 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6ssl\" (UniqueName: \"kubernetes.io/projected/ea9ff5c7-bf16-488f-8289-cbc134c9416e-kube-api-access-v6ssl\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.671060 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea9ff5c7-bf16-488f-8289-cbc134c9416e-ovndb-tls-certs\") pod \"neutron-549548697-x46rl\" (UID: \"ea9ff5c7-bf16-488f-8289-cbc134c9416e\") " pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:54 crc kubenswrapper[4817]: I0314 05:53:54.801886 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:55 crc kubenswrapper[4817]: W0314 05:53:55.543130 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea9ff5c7_bf16_488f_8289_cbc134c9416e.slice/crio-fc08b2cafe495190fcb18709c3983606658027eaaf1554dcaab208448a1e747b WatchSource:0}: Error finding container fc08b2cafe495190fcb18709c3983606658027eaaf1554dcaab208448a1e747b: Status 404 returned error can't find the container with id fc08b2cafe495190fcb18709c3983606658027eaaf1554dcaab208448a1e747b Mar 14 05:53:55 crc kubenswrapper[4817]: I0314 05:53:55.543263 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-549548697-x46rl"] Mar 14 05:53:56 crc kubenswrapper[4817]: I0314 05:53:56.234871 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5dc6f976cd-xr97w" Mar 14 05:53:56 crc kubenswrapper[4817]: I0314 05:53:56.568311 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549548697-x46rl" event={"ID":"ea9ff5c7-bf16-488f-8289-cbc134c9416e","Type":"ContainerStarted","Data":"d59f92d676fb0c59e4f1d471d694a9273a2022a3ac424bec750a29a5a243d108"} Mar 14 05:53:56 crc kubenswrapper[4817]: I0314 05:53:56.568395 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549548697-x46rl" event={"ID":"ea9ff5c7-bf16-488f-8289-cbc134c9416e","Type":"ContainerStarted","Data":"efa5c1b42ebe49633116ba5a911394fc32e259765148cc434c28525b5da1befb"} Mar 14 05:53:56 crc kubenswrapper[4817]: I0314 05:53:56.568431 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549548697-x46rl" event={"ID":"ea9ff5c7-bf16-488f-8289-cbc134c9416e","Type":"ContainerStarted","Data":"fc08b2cafe495190fcb18709c3983606658027eaaf1554dcaab208448a1e747b"} Mar 14 05:53:56 crc kubenswrapper[4817]: I0314 05:53:56.568598 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/neutron-549548697-x46rl" Mar 14 05:53:56 crc kubenswrapper[4817]: I0314 05:53:56.597126 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-549548697-x46rl" podStartSLOduration=2.597107044 podStartE2EDuration="2.597107044s" podCreationTimestamp="2026-03-14 05:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:53:56.595100557 +0000 UTC m=+1290.633361303" watchObservedRunningTime="2026-03-14 05:53:56.597107044 +0000 UTC m=+1290.635367790" Mar 14 05:53:57 crc kubenswrapper[4817]: I0314 05:53:57.706634 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:60360->10.217.0.148:9311: read: connection reset by peer" Mar 14 05:53:57 crc kubenswrapper[4817]: I0314 05:53:57.706707 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:60370->10.217.0.148:9311: read: connection reset by peer" Mar 14 05:53:57 crc kubenswrapper[4817]: I0314 05:53:57.709198 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:60380->10.217.0.148:9311: read: connection reset by peer" Mar 14 05:53:57 crc kubenswrapper[4817]: I0314 05:53:57.709641 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d5789d498-qs55q" 
podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": dial tcp 10.217.0.148:9311: connect: connection refused" Mar 14 05:53:57 crc kubenswrapper[4817]: I0314 05:53:57.709752 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:57 crc kubenswrapper[4817]: I0314 05:53:57.709962 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5d5789d498-qs55q" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": dial tcp 10.217.0.148:9311: connect: connection refused" Mar 14 05:53:57 crc kubenswrapper[4817]: I0314 05:53:57.710364 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.264753 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.431008 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6921187e-5058-45ef-9ba2-13a205560c11-logs\") pod \"6921187e-5058-45ef-9ba2-13a205560c11\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.431281 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data\") pod \"6921187e-5058-45ef-9ba2-13a205560c11\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.431328 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-combined-ca-bundle\") pod \"6921187e-5058-45ef-9ba2-13a205560c11\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.431366 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lplns\" (UniqueName: \"kubernetes.io/projected/6921187e-5058-45ef-9ba2-13a205560c11-kube-api-access-lplns\") pod \"6921187e-5058-45ef-9ba2-13a205560c11\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.431393 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data-custom\") pod \"6921187e-5058-45ef-9ba2-13a205560c11\" (UID: \"6921187e-5058-45ef-9ba2-13a205560c11\") " Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.431673 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6921187e-5058-45ef-9ba2-13a205560c11-logs" (OuterVolumeSpecName: "logs") pod "6921187e-5058-45ef-9ba2-13a205560c11" (UID: "6921187e-5058-45ef-9ba2-13a205560c11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.432038 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6921187e-5058-45ef-9ba2-13a205560c11-logs\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.441527 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6921187e-5058-45ef-9ba2-13a205560c11-kube-api-access-lplns" (OuterVolumeSpecName: "kube-api-access-lplns") pod "6921187e-5058-45ef-9ba2-13a205560c11" (UID: "6921187e-5058-45ef-9ba2-13a205560c11"). InnerVolumeSpecName "kube-api-access-lplns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.449166 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6921187e-5058-45ef-9ba2-13a205560c11" (UID: "6921187e-5058-45ef-9ba2-13a205560c11"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.473131 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6921187e-5058-45ef-9ba2-13a205560c11" (UID: "6921187e-5058-45ef-9ba2-13a205560c11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.496560 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data" (OuterVolumeSpecName: "config-data") pod "6921187e-5058-45ef-9ba2-13a205560c11" (UID: "6921187e-5058-45ef-9ba2-13a205560c11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.534769 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.534845 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.534863 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lplns\" (UniqueName: \"kubernetes.io/projected/6921187e-5058-45ef-9ba2-13a205560c11-kube-api-access-lplns\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.534877 4817 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6921187e-5058-45ef-9ba2-13a205560c11-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.604874 4817 generic.go:334] "Generic (PLEG): container finished" podID="6921187e-5058-45ef-9ba2-13a205560c11" containerID="e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9" exitCode=0 Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.605009 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5789d498-qs55q" 
event={"ID":"6921187e-5058-45ef-9ba2-13a205560c11","Type":"ContainerDied","Data":"e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9"} Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.605063 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d5789d498-qs55q" event={"ID":"6921187e-5058-45ef-9ba2-13a205560c11","Type":"ContainerDied","Data":"020ef7df48ea9086d0cd7f2d14e31a4dba8f43ad3081dce8d25039320e08a8f7"} Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.605112 4817 scope.go:117] "RemoveContainer" containerID="e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.605511 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d5789d498-qs55q" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.648936 4817 scope.go:117] "RemoveContainer" containerID="abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.683871 4817 scope.go:117] "RemoveContainer" containerID="e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9" Mar 14 05:53:58 crc kubenswrapper[4817]: E0314 05:53:58.684819 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9\": container with ID starting with e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9 not found: ID does not exist" containerID="e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.684914 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9"} err="failed to get container status \"e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9\": rpc 
error: code = NotFound desc = could not find container \"e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9\": container with ID starting with e93e922befd74fab71d1aba8c62174e6fb2a8d09835183821d41ded63b9418e9 not found: ID does not exist" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.684948 4817 scope.go:117] "RemoveContainer" containerID="abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.685688 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d5789d498-qs55q"] Mar 14 05:53:58 crc kubenswrapper[4817]: E0314 05:53:58.686165 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11\": container with ID starting with abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11 not found: ID does not exist" containerID="abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.686232 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11"} err="failed to get container status \"abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11\": rpc error: code = NotFound desc = could not find container \"abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11\": container with ID starting with abfb263c6df30fd4861e7af8c6821bb275161065044e0222acd79e15f3194a11 not found: ID does not exist" Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.698477 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d5789d498-qs55q"] Mar 14 05:53:58 crc kubenswrapper[4817]: I0314 05:53:58.766283 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6921187e-5058-45ef-9ba2-13a205560c11" 
path="/var/lib/kubelet/pods/6921187e-5058-45ef-9ba2-13a205560c11/volumes" Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.125307 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.206854 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-5xpnr"] Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.207220 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" podUID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" containerName="dnsmasq-dns" containerID="cri-o://b9df2c54dba06e7ad1e2e1394b10ef5e298f1a6234cc69f78a627bcdecd055d8" gracePeriod=10 Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.382279 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.500889 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.621851 4817 generic.go:334] "Generic (PLEG): container finished" podID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" containerID="b9df2c54dba06e7ad1e2e1394b10ef5e298f1a6234cc69f78a627bcdecd055d8" exitCode=0 Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.621978 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" event={"ID":"4f526d8c-6e78-4c4a-9528-88006e41d2d7","Type":"ContainerDied","Data":"b9df2c54dba06e7ad1e2e1394b10ef5e298f1a6234cc69f78a627bcdecd055d8"} Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.622098 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerName="cinder-scheduler" 
containerID="cri-o://2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8" gracePeriod=30 Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.622190 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerName="probe" containerID="cri-o://315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799" gracePeriod=30 Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.815933 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.970935 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-dns-svc\") pod \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.971051 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgrmb\" (UniqueName: \"kubernetes.io/projected/4f526d8c-6e78-4c4a-9528-88006e41d2d7-kube-api-access-bgrmb\") pod \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.971163 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-config\") pod \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.971245 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-nb\") pod \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\" (UID: 
\"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.971321 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-sb\") pod \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\" (UID: \"4f526d8c-6e78-4c4a-9528-88006e41d2d7\") " Mar 14 05:53:59 crc kubenswrapper[4817]: I0314 05:53:59.982852 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f526d8c-6e78-4c4a-9528-88006e41d2d7-kube-api-access-bgrmb" (OuterVolumeSpecName: "kube-api-access-bgrmb") pod "4f526d8c-6e78-4c4a-9528-88006e41d2d7" (UID: "4f526d8c-6e78-4c4a-9528-88006e41d2d7"). InnerVolumeSpecName "kube-api-access-bgrmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.052822 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-config" (OuterVolumeSpecName: "config") pod "4f526d8c-6e78-4c4a-9528-88006e41d2d7" (UID: "4f526d8c-6e78-4c4a-9528-88006e41d2d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.063236 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f526d8c-6e78-4c4a-9528-88006e41d2d7" (UID: "4f526d8c-6e78-4c4a-9528-88006e41d2d7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.067610 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f526d8c-6e78-4c4a-9528-88006e41d2d7" (UID: "4f526d8c-6e78-4c4a-9528-88006e41d2d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.078659 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgrmb\" (UniqueName: \"kubernetes.io/projected/4f526d8c-6e78-4c4a-9528-88006e41d2d7-kube-api-access-bgrmb\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.078720 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.078745 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.078756 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.094167 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f526d8c-6e78-4c4a-9528-88006e41d2d7" (UID: "4f526d8c-6e78-4c4a-9528-88006e41d2d7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.166057 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557794-ztjmq"] Mar 14 05:54:00 crc kubenswrapper[4817]: E0314 05:54:00.166710 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api-log" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.166736 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api-log" Mar 14 05:54:00 crc kubenswrapper[4817]: E0314 05:54:00.166760 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.166767 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" Mar 14 05:54:00 crc kubenswrapper[4817]: E0314 05:54:00.166799 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" containerName="init" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.166806 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" containerName="init" Mar 14 05:54:00 crc kubenswrapper[4817]: E0314 05:54:00.166823 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" containerName="dnsmasq-dns" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.166832 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" containerName="dnsmasq-dns" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.167026 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api-log" Mar 14 
05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.167052 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6921187e-5058-45ef-9ba2-13a205560c11" containerName="barbican-api" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.167068 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" containerName="dnsmasq-dns" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.167952 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.172267 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.172465 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.172646 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.182111 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lhh\" (UniqueName: \"kubernetes.io/projected/c3277bee-fe8a-4fa1-a005-cc16f69665b7-kube-api-access-j9lhh\") pod \"auto-csr-approver-29557794-ztjmq\" (UID: \"c3277bee-fe8a-4fa1-a005-cc16f69665b7\") " pod="openshift-infra/auto-csr-approver-29557794-ztjmq" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.182224 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f526d8c-6e78-4c4a-9528-88006e41d2d7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.191884 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557794-ztjmq"] Mar 14 05:54:00 crc 
kubenswrapper[4817]: I0314 05:54:00.283952 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9lhh\" (UniqueName: \"kubernetes.io/projected/c3277bee-fe8a-4fa1-a005-cc16f69665b7-kube-api-access-j9lhh\") pod \"auto-csr-approver-29557794-ztjmq\" (UID: \"c3277bee-fe8a-4fa1-a005-cc16f69665b7\") " pod="openshift-infra/auto-csr-approver-29557794-ztjmq" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.303323 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9lhh\" (UniqueName: \"kubernetes.io/projected/c3277bee-fe8a-4fa1-a005-cc16f69665b7-kube-api-access-j9lhh\") pod \"auto-csr-approver-29557794-ztjmq\" (UID: \"c3277bee-fe8a-4fa1-a005-cc16f69665b7\") " pod="openshift-infra/auto-csr-approver-29557794-ztjmq" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.564978 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.637451 4817 generic.go:334] "Generic (PLEG): container finished" podID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerID="315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799" exitCode=0 Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.637675 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9de96af-67da-4521-9f6f-f6dfadb3b270","Type":"ContainerDied","Data":"315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799"} Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.647994 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" event={"ID":"4f526d8c-6e78-4c4a-9528-88006e41d2d7","Type":"ContainerDied","Data":"d7a347e9112649b2c7cf48820478cefc797131405ca52a47d004ee5b83da341f"} Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.648094 4817 scope.go:117] "RemoveContainer" 
containerID="b9df2c54dba06e7ad1e2e1394b10ef5e298f1a6234cc69f78a627bcdecd055d8" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.648092 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-5xpnr" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.695005 4817 scope.go:117] "RemoveContainer" containerID="520b4a900e8db44a5b1cfb60e5017326cdadfeb902e37e5f41afa9955b4be6f6" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.706513 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-5xpnr"] Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.719000 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-5xpnr"] Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.817313 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f526d8c-6e78-4c4a-9528-88006e41d2d7" path="/var/lib/kubelet/pods/4f526d8c-6e78-4c4a-9528-88006e41d2d7/volumes" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.932884 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.939526 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.945948 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.946307 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-7dnzf" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.954397 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 14 05:54:00 crc kubenswrapper[4817]: I0314 05:54:00.982699 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.105502 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9gk\" (UniqueName: \"kubernetes.io/projected/30bca2b3-da0c-4f63-b9fb-95c742af358e-kube-api-access-cf9gk\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.105633 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/30bca2b3-da0c-4f63-b9fb-95c742af358e-openstack-config-secret\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.105661 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/30bca2b3-da0c-4f63-b9fb-95c742af358e-openstack-config\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.105717 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bca2b3-da0c-4f63-b9fb-95c742af358e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.201862 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557794-ztjmq"] Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.208857 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/30bca2b3-da0c-4f63-b9fb-95c742af358e-openstack-config-secret\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.208928 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/30bca2b3-da0c-4f63-b9fb-95c742af358e-openstack-config\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.208997 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bca2b3-da0c-4f63-b9fb-95c742af358e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.209092 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9gk\" (UniqueName: \"kubernetes.io/projected/30bca2b3-da0c-4f63-b9fb-95c742af358e-kube-api-access-cf9gk\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 
crc kubenswrapper[4817]: I0314 05:54:01.218282 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/30bca2b3-da0c-4f63-b9fb-95c742af358e-openstack-config\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.231956 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/30bca2b3-da0c-4f63-b9fb-95c742af358e-openstack-config-secret\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.234100 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bca2b3-da0c-4f63-b9fb-95c742af358e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.257907 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9gk\" (UniqueName: \"kubernetes.io/projected/30bca2b3-da0c-4f63-b9fb-95c742af358e-kube-api-access-cf9gk\") pod \"openstackclient\" (UID: \"30bca2b3-da0c-4f63-b9fb-95c742af358e\") " pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.283863 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.567839 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.661482 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" event={"ID":"c3277bee-fe8a-4fa1-a005-cc16f69665b7","Type":"ContainerStarted","Data":"8980236c96cef878f27f8fc834dbb03c2fdf2a5cfc7f42f558efbb60dabe62e8"} Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.668689 4817 generic.go:334] "Generic (PLEG): container finished" podID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerID="2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8" exitCode=0 Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.668776 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.668758 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9de96af-67da-4521-9f6f-f6dfadb3b270","Type":"ContainerDied","Data":"2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8"} Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.668885 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d9de96af-67da-4521-9f6f-f6dfadb3b270","Type":"ContainerDied","Data":"7f1d1d7bd2587cd9d15ddabe57fa770f46fab72d4096651ce88a869f0b07fd5f"} Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.668940 4817 scope.go:117] "RemoveContainer" containerID="315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.706031 4817 scope.go:117] "RemoveContainer" containerID="2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.719550 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data\") pod \"d9de96af-67da-4521-9f6f-f6dfadb3b270\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.719729 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9de96af-67da-4521-9f6f-f6dfadb3b270-etc-machine-id\") pod \"d9de96af-67da-4521-9f6f-f6dfadb3b270\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.719871 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data-custom\") pod \"d9de96af-67da-4521-9f6f-f6dfadb3b270\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.719992 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvcdp\" (UniqueName: \"kubernetes.io/projected/d9de96af-67da-4521-9f6f-f6dfadb3b270-kube-api-access-lvcdp\") pod \"d9de96af-67da-4521-9f6f-f6dfadb3b270\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.720030 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-combined-ca-bundle\") pod \"d9de96af-67da-4521-9f6f-f6dfadb3b270\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.720055 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-scripts\") pod \"d9de96af-67da-4521-9f6f-f6dfadb3b270\" (UID: \"d9de96af-67da-4521-9f6f-f6dfadb3b270\") " Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 
05:54:01.721354 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9de96af-67da-4521-9f6f-f6dfadb3b270-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d9de96af-67da-4521-9f6f-f6dfadb3b270" (UID: "d9de96af-67da-4521-9f6f-f6dfadb3b270"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.729038 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d9de96af-67da-4521-9f6f-f6dfadb3b270" (UID: "d9de96af-67da-4521-9f6f-f6dfadb3b270"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.741079 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-scripts" (OuterVolumeSpecName: "scripts") pod "d9de96af-67da-4521-9f6f-f6dfadb3b270" (UID: "d9de96af-67da-4521-9f6f-f6dfadb3b270"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.741117 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9de96af-67da-4521-9f6f-f6dfadb3b270-kube-api-access-lvcdp" (OuterVolumeSpecName: "kube-api-access-lvcdp") pod "d9de96af-67da-4521-9f6f-f6dfadb3b270" (UID: "d9de96af-67da-4521-9f6f-f6dfadb3b270"). InnerVolumeSpecName "kube-api-access-lvcdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.756606 4817 scope.go:117] "RemoveContainer" containerID="315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799" Mar 14 05:54:01 crc kubenswrapper[4817]: E0314 05:54:01.757574 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799\": container with ID starting with 315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799 not found: ID does not exist" containerID="315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.757602 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799"} err="failed to get container status \"315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799\": rpc error: code = NotFound desc = could not find container \"315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799\": container with ID starting with 315ebbe0ce36c24082240daeed227da4a375c3f1acf698a605863b0878392799 not found: ID does not exist" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.757625 4817 scope.go:117] "RemoveContainer" containerID="2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8" Mar 14 05:54:01 crc kubenswrapper[4817]: E0314 05:54:01.757888 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8\": container with ID starting with 2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8 not found: ID does not exist" containerID="2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.757948 
4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8"} err="failed to get container status \"2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8\": rpc error: code = NotFound desc = could not find container \"2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8\": container with ID starting with 2f1ea53a345bb8b4d2f1aa32a3da1d70639dfad1d3f46bbb7bf73f01f855cba8 not found: ID does not exist" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.785047 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9de96af-67da-4521-9f6f-f6dfadb3b270" (UID: "d9de96af-67da-4521-9f6f-f6dfadb3b270"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.822449 4817 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9de96af-67da-4521-9f6f-f6dfadb3b270-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.822502 4817 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.822518 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvcdp\" (UniqueName: \"kubernetes.io/projected/d9de96af-67da-4521-9f6f-f6dfadb3b270-kube-api-access-lvcdp\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.822532 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.822545 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.841212 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data" (OuterVolumeSpecName: "config-data") pod "d9de96af-67da-4521-9f6f-f6dfadb3b270" (UID: "d9de96af-67da-4521-9f6f-f6dfadb3b270"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.898640 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 14 05:54:01 crc kubenswrapper[4817]: W0314 05:54:01.905125 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30bca2b3_da0c_4f63_b9fb_95c742af358e.slice/crio-5c84c8c841c6ff03d2dc066457235d5a3f37feaa633f1632ad8173f9406b780c WatchSource:0}: Error finding container 5c84c8c841c6ff03d2dc066457235d5a3f37feaa633f1632ad8173f9406b780c: Status 404 returned error can't find the container with id 5c84c8c841c6ff03d2dc066457235d5a3f37feaa633f1632ad8173f9406b780c Mar 14 05:54:01 crc kubenswrapper[4817]: I0314 05:54:01.924384 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9de96af-67da-4521-9f6f-f6dfadb3b270-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.137328 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.161375 4817 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.174582 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 05:54:02 crc kubenswrapper[4817]: E0314 05:54:02.177153 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerName="cinder-scheduler" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.177206 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerName="cinder-scheduler" Mar 14 05:54:02 crc kubenswrapper[4817]: E0314 05:54:02.177320 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerName="probe" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.177332 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerName="probe" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.177775 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerName="cinder-scheduler" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.177792 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" containerName="probe" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.179154 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.182477 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.254168 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.335760 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.335841 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.336011 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n4ms\" (UniqueName: \"kubernetes.io/projected/2a37bd39-17a6-4c93-8146-b694d6e30b37-kube-api-access-9n4ms\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.336367 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: 
I0314 05:54:02.336494 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.336641 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a37bd39-17a6-4c93-8146-b694d6e30b37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.438556 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a37bd39-17a6-4c93-8146-b694d6e30b37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.439099 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.438699 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a37bd39-17a6-4c93-8146-b694d6e30b37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.439139 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.439261 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n4ms\" (UniqueName: \"kubernetes.io/projected/2a37bd39-17a6-4c93-8146-b694d6e30b37-kube-api-access-9n4ms\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.439502 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.439606 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.448226 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.448227 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " 
pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.448649 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.449055 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a37bd39-17a6-4c93-8146-b694d6e30b37-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.460167 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n4ms\" (UniqueName: \"kubernetes.io/projected/2a37bd39-17a6-4c93-8146-b694d6e30b37-kube-api-access-9n4ms\") pod \"cinder-scheduler-0\" (UID: \"2a37bd39-17a6-4c93-8146-b694d6e30b37\") " pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.517092 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 05:54:02 crc kubenswrapper[4817]: I0314 05:54:02.542693 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 14 05:54:03 crc kubenswrapper[4817]: I0314 05:54:03.101352 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9de96af-67da-4521-9f6f-f6dfadb3b270" path="/var/lib/kubelet/pods/d9de96af-67da-4521-9f6f-f6dfadb3b270/volumes" Mar 14 05:54:03 crc kubenswrapper[4817]: I0314 05:54:03.103053 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"30bca2b3-da0c-4f63-b9fb-95c742af358e","Type":"ContainerStarted","Data":"5c84c8c841c6ff03d2dc066457235d5a3f37feaa633f1632ad8173f9406b780c"} Mar 14 05:54:03 crc kubenswrapper[4817]: I0314 05:54:03.520392 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 05:54:04 crc kubenswrapper[4817]: I0314 05:54:04.430542 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a37bd39-17a6-4c93-8146-b694d6e30b37","Type":"ContainerStarted","Data":"8e122ea958fddbfc0d859c75a8d9484aa646987efaf8b64eff3a2053d6cd45d4"} Mar 14 05:54:06 crc kubenswrapper[4817]: I0314 05:54:06.880834 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a37bd39-17a6-4c93-8146-b694d6e30b37","Type":"ContainerStarted","Data":"e4b9c666c65fe6263cbe97c592523f49b83d0fe6912cc81040528a3927b7d277"} Mar 14 05:54:06 crc kubenswrapper[4817]: I0314 05:54:06.894542 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" event={"ID":"c3277bee-fe8a-4fa1-a005-cc16f69665b7","Type":"ContainerStarted","Data":"32fa62abf72f73eceea4d7c9ad5e88badb95adf335d50f4b69180e1d166ba8d9"} Mar 14 05:54:07 crc kubenswrapper[4817]: I0314 05:54:07.083066 4817 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" podStartSLOduration=3.898528501 podStartE2EDuration="7.083037787s" podCreationTimestamp="2026-03-14 05:54:00 +0000 UTC" firstStartedPulling="2026-03-14 05:54:01.219570238 +0000 UTC m=+1295.257830994" lastFinishedPulling="2026-03-14 05:54:04.404079534 +0000 UTC m=+1298.442340280" observedRunningTime="2026-03-14 05:54:07.079503605 +0000 UTC m=+1301.117764351" watchObservedRunningTime="2026-03-14 05:54:07.083037787 +0000 UTC m=+1301.121298533" Mar 14 05:54:07 crc kubenswrapper[4817]: I0314 05:54:07.930402 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a37bd39-17a6-4c93-8146-b694d6e30b37","Type":"ContainerStarted","Data":"e40e05a26a91b909054345404bc4a1f0836bb185bd1cb23a141b441fdbc068a6"} Mar 14 05:54:08 crc kubenswrapper[4817]: I0314 05:54:07.953421 4817 generic.go:334] "Generic (PLEG): container finished" podID="c3277bee-fe8a-4fa1-a005-cc16f69665b7" containerID="32fa62abf72f73eceea4d7c9ad5e88badb95adf335d50f4b69180e1d166ba8d9" exitCode=0 Mar 14 05:54:08 crc kubenswrapper[4817]: I0314 05:54:07.953478 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" event={"ID":"c3277bee-fe8a-4fa1-a005-cc16f69665b7","Type":"ContainerDied","Data":"32fa62abf72f73eceea4d7c9ad5e88badb95adf335d50f4b69180e1d166ba8d9"} Mar 14 05:54:08 crc kubenswrapper[4817]: I0314 05:54:08.121348 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.121320264 podStartE2EDuration="6.121320264s" podCreationTimestamp="2026-03-14 05:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:54:08.108939088 +0000 UTC m=+1302.147199854" watchObservedRunningTime="2026-03-14 05:54:08.121320264 +0000 UTC m=+1302.159581010" Mar 14 05:54:08 crc kubenswrapper[4817]: I0314 
05:54:08.565576 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:54:08 crc kubenswrapper[4817]: I0314 05:54:08.565659 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:54:08 crc kubenswrapper[4817]: I0314 05:54:08.565710 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:54:08 crc kubenswrapper[4817]: I0314 05:54:08.566570 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cedc780ac8b4d762839f86c77e2e11ff9cb9f77222802713452641e56fdcbca"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 05:54:08 crc kubenswrapper[4817]: I0314 05:54:08.566632 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://4cedc780ac8b4d762839f86c77e2e11ff9cb9f77222802713452641e56fdcbca" gracePeriod=600 Mar 14 05:54:09 crc kubenswrapper[4817]: I0314 05:54:09.636613 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api" probeResult="failure" output="Get 
\"http://10.217.0.155:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.091540 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="4cedc780ac8b4d762839f86c77e2e11ff9cb9f77222802713452641e56fdcbca" exitCode=0 Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.119541 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"4cedc780ac8b4d762839f86c77e2e11ff9cb9f77222802713452641e56fdcbca"} Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.119608 4817 scope.go:117] "RemoveContainer" containerID="410879e5dd288fd6afed9ea2c23e57c34cba5d0fba30b068075ef7767158e5fe" Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.207573 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.311629 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9lhh\" (UniqueName: \"kubernetes.io/projected/c3277bee-fe8a-4fa1-a005-cc16f69665b7-kube-api-access-j9lhh\") pod \"c3277bee-fe8a-4fa1-a005-cc16f69665b7\" (UID: \"c3277bee-fe8a-4fa1-a005-cc16f69665b7\") " Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.321153 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3277bee-fe8a-4fa1-a005-cc16f69665b7-kube-api-access-j9lhh" (OuterVolumeSpecName: "kube-api-access-j9lhh") pod "c3277bee-fe8a-4fa1-a005-cc16f69665b7" (UID: "c3277bee-fe8a-4fa1-a005-cc16f69665b7"). InnerVolumeSpecName "kube-api-access-j9lhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.324555 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.350720 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-67dfb54788-qqrtk" Mar 14 05:54:11 crc kubenswrapper[4817]: I0314 05:54:11.449198 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9lhh\" (UniqueName: \"kubernetes.io/projected/c3277bee-fe8a-4fa1-a005-cc16f69665b7-kube-api-access-j9lhh\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.103138 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"8509fe209e0d6b6f9ac3932c37444f6f3b02987e960352c4af8c1492f53dab1b"} Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.107117 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" event={"ID":"c3277bee-fe8a-4fa1-a005-cc16f69665b7","Type":"ContainerDied","Data":"8980236c96cef878f27f8fc834dbb03c2fdf2a5cfc7f42f558efbb60dabe62e8"} Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.107199 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8980236c96cef878f27f8fc834dbb03c2fdf2a5cfc7f42f558efbb60dabe62e8" Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.107141 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557794-ztjmq" Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.391451 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557788-l2jj8"] Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.400426 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557788-l2jj8"] Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.517803 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.752610 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eda5581-2c9a-4a68-8bd5-e595b74b941d" path="/var/lib/kubelet/pods/5eda5581-2c9a-4a68-8bd5-e595b74b941d/volumes" Mar 14 05:54:12 crc kubenswrapper[4817]: I0314 05:54:12.930111 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 14 05:54:14 crc kubenswrapper[4817]: I0314 05:54:14.331005 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 05:54:18 crc kubenswrapper[4817]: I0314 05:54:18.025822 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:54:18 crc kubenswrapper[4817]: I0314 05:54:18.464589 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 05:54:18 crc kubenswrapper[4817]: I0314 05:54:18.465409 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="480e01af-fe53-4341-9987-b53552c7b77f" containerName="kube-state-metrics" containerID="cri-o://d36f4f8082cf3d4d2972ff1acee7869e2ee239fff5708afd8dc0e170648fe404" gracePeriod=30 Mar 14 05:54:19 crc kubenswrapper[4817]: I0314 05:54:19.178852 4817 generic.go:334] "Generic (PLEG): container 
finished" podID="480e01af-fe53-4341-9987-b53552c7b77f" containerID="d36f4f8082cf3d4d2972ff1acee7869e2ee239fff5708afd8dc0e170648fe404" exitCode=2
Mar 14 05:54:19 crc kubenswrapper[4817]: I0314 05:54:19.178954 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"480e01af-fe53-4341-9987-b53552c7b77f","Type":"ContainerDied","Data":"d36f4f8082cf3d4d2972ff1acee7869e2ee239fff5708afd8dc0e170648fe404"}
Mar 14 05:54:19 crc kubenswrapper[4817]: I0314 05:54:19.675590 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:54:19 crc kubenswrapper[4817]: I0314 05:54:19.675998 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="ceilometer-central-agent" containerID="cri-o://f5d8a669e8609ac990c4e24ffe65c714c013ce07fb450a5d9e827508fc1da827" gracePeriod=30
Mar 14 05:54:19 crc kubenswrapper[4817]: I0314 05:54:19.678148 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="proxy-httpd" containerID="cri-o://f9baa29b6ec9a9ef2b8912f51bde426314d5e03ab2aef9eb2db463f1f38255a3" gracePeriod=30
Mar 14 05:54:19 crc kubenswrapper[4817]: I0314 05:54:19.678329 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="ceilometer-notification-agent" containerID="cri-o://bb8fb08a199b85e5cf72c70d43f78c977e906fdb2a5ccef9c6d8c7ee45eb6d5c" gracePeriod=30
Mar 14 05:54:19 crc kubenswrapper[4817]: I0314 05:54:19.678385 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="sg-core" containerID="cri-o://1aea90580413b4215ff12e0314af7a1894f46dffc6ad184b3159f677a11526e2" gracePeriod=30
Mar 14 05:54:20 crc kubenswrapper[4817]: I0314 05:54:20.207521 4817 generic.go:334] "Generic (PLEG): container finished" podID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerID="f9baa29b6ec9a9ef2b8912f51bde426314d5e03ab2aef9eb2db463f1f38255a3" exitCode=0
Mar 14 05:54:20 crc kubenswrapper[4817]: I0314 05:54:20.208041 4817 generic.go:334] "Generic (PLEG): container finished" podID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerID="1aea90580413b4215ff12e0314af7a1894f46dffc6ad184b3159f677a11526e2" exitCode=2
Mar 14 05:54:20 crc kubenswrapper[4817]: I0314 05:54:20.207745 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerDied","Data":"f9baa29b6ec9a9ef2b8912f51bde426314d5e03ab2aef9eb2db463f1f38255a3"}
Mar 14 05:54:20 crc kubenswrapper[4817]: I0314 05:54:20.208100 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerDied","Data":"1aea90580413b4215ff12e0314af7a1894f46dffc6ad184b3159f677a11526e2"}
Mar 14 05:54:21 crc kubenswrapper[4817]: I0314 05:54:21.222424 4817 generic.go:334] "Generic (PLEG): container finished" podID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerID="bb8fb08a199b85e5cf72c70d43f78c977e906fdb2a5ccef9c6d8c7ee45eb6d5c" exitCode=0
Mar 14 05:54:21 crc kubenswrapper[4817]: I0314 05:54:21.222469 4817 generic.go:334] "Generic (PLEG): container finished" podID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerID="f5d8a669e8609ac990c4e24ffe65c714c013ce07fb450a5d9e827508fc1da827" exitCode=0
Mar 14 05:54:21 crc kubenswrapper[4817]: I0314 05:54:21.222501 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerDied","Data":"bb8fb08a199b85e5cf72c70d43f78c977e906fdb2a5ccef9c6d8c7ee45eb6d5c"}
Mar 14 05:54:21 crc kubenswrapper[4817]: I0314 05:54:21.222539 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerDied","Data":"f5d8a669e8609ac990c4e24ffe65c714c013ce07fb450a5d9e827508fc1da827"}
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.256603 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.262529 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"480e01af-fe53-4341-9987-b53552c7b77f","Type":"ContainerDied","Data":"954382abe2b87fecbbe41e60074c8de6496242dcfebd8322a252d38eac2ca381"}
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.262597 4817 scope.go:117] "RemoveContainer" containerID="d36f4f8082cf3d4d2972ff1acee7869e2ee239fff5708afd8dc0e170648fe404"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.263057 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.273223 4817 generic.go:334] "Generic (PLEG): container finished" podID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerID="1729035a49e67beae40684181ace5f2f5a3c64facc3b297b3420aef969393d55" exitCode=137
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.273366 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ccee9da-6ff0-4b19-a97b-8fc7071782d6","Type":"ContainerDied","Data":"1729035a49e67beae40684181ace5f2f5a3c64facc3b297b3420aef969393d55"}
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.278645 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9czd\" (UniqueName: \"kubernetes.io/projected/480e01af-fe53-4341-9987-b53552c7b77f-kube-api-access-x9czd\") pod \"480e01af-fe53-4341-9987-b53552c7b77f\" (UID: \"480e01af-fe53-4341-9987-b53552c7b77f\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.312236 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480e01af-fe53-4341-9987-b53552c7b77f-kube-api-access-x9czd" (OuterVolumeSpecName: "kube-api-access-x9czd") pod "480e01af-fe53-4341-9987-b53552c7b77f" (UID: "480e01af-fe53-4341-9987-b53552c7b77f"). InnerVolumeSpecName "kube-api-access-x9czd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.381495 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9czd\" (UniqueName: \"kubernetes.io/projected/480e01af-fe53-4341-9987-b53552c7b77f-kube-api-access-x9czd\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.443372 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.445318 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.482849 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data-custom\") pod \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483002 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-sg-core-conf-yaml\") pod \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483033 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-run-httpd\") pod \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483076 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-scripts\") pod \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483137 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4gsv\" (UniqueName: \"kubernetes.io/projected/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-kube-api-access-h4gsv\") pod \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483200 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-logs\") pod \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483234 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5hx\" (UniqueName: \"kubernetes.io/projected/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-kube-api-access-jz5hx\") pod \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483260 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-config-data\") pod \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483345 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-combined-ca-bundle\") pod \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483378 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-etc-machine-id\") pod \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483449 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-log-httpd\") pod \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483498 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-combined-ca-bundle\") pod \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483541 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data\") pod \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\" (UID: \"5ccee9da-6ff0-4b19-a97b-8fc7071782d6\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.483600 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-scripts\") pod \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\" (UID: \"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e\") "
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.484286 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" (UID: "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.484725 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5ccee9da-6ff0-4b19-a97b-8fc7071782d6" (UID: "5ccee9da-6ff0-4b19-a97b-8fc7071782d6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.490586 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-logs" (OuterVolumeSpecName: "logs") pod "5ccee9da-6ff0-4b19-a97b-8fc7071782d6" (UID: "5ccee9da-6ff0-4b19-a97b-8fc7071782d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.492660 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" (UID: "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.502856 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-kube-api-access-h4gsv" (OuterVolumeSpecName: "kube-api-access-h4gsv") pod "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" (UID: "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e"). InnerVolumeSpecName "kube-api-access-h4gsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.502868 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-scripts" (OuterVolumeSpecName: "scripts") pod "5ccee9da-6ff0-4b19-a97b-8fc7071782d6" (UID: "5ccee9da-6ff0-4b19-a97b-8fc7071782d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.502932 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5ccee9da-6ff0-4b19-a97b-8fc7071782d6" (UID: "5ccee9da-6ff0-4b19-a97b-8fc7071782d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.503883 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-kube-api-access-jz5hx" (OuterVolumeSpecName: "kube-api-access-jz5hx") pod "5ccee9da-6ff0-4b19-a97b-8fc7071782d6" (UID: "5ccee9da-6ff0-4b19-a97b-8fc7071782d6"). InnerVolumeSpecName "kube-api-access-jz5hx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.508438 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-scripts" (OuterVolumeSpecName: "scripts") pod "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" (UID: "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.551017 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" (UID: "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.574661 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ccee9da-6ff0-4b19-a97b-8fc7071782d6" (UID: "5ccee9da-6ff0-4b19-a97b-8fc7071782d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.586015 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-logs\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.586420 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5hx\" (UniqueName: \"kubernetes.io/projected/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-kube-api-access-jz5hx\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.586508 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.586582 4817 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.586701 4817 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.586778 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.587026 4817 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.587113 4817 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.587190 4817 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.587276 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.587411 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4gsv\" (UniqueName: \"kubernetes.io/projected/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-kube-api-access-h4gsv\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.603776 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data" (OuterVolumeSpecName: "config-data") pod "5ccee9da-6ff0-4b19-a97b-8fc7071782d6" (UID: "5ccee9da-6ff0-4b19-a97b-8fc7071782d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.623611 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" (UID: "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.629220 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.659535 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.678511 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 14 05:54:24 crc kubenswrapper[4817]: E0314 05:54:24.679085 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3277bee-fe8a-4fa1-a005-cc16f69665b7" containerName="oc"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679107 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3277bee-fe8a-4fa1-a005-cc16f69665b7" containerName="oc"
Mar 14 05:54:24 crc kubenswrapper[4817]: E0314 05:54:24.679124 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="sg-core"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679131 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="sg-core"
Mar 14 05:54:24 crc kubenswrapper[4817]: E0314 05:54:24.679146 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="ceilometer-central-agent"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679153 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="ceilometer-central-agent"
Mar 14 05:54:24 crc kubenswrapper[4817]: E0314 05:54:24.679161 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="proxy-httpd"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679169 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="proxy-httpd"
Mar 14 05:54:24 crc kubenswrapper[4817]: E0314 05:54:24.679177 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api-log"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679183 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api-log"
Mar 14 05:54:24 crc kubenswrapper[4817]: E0314 05:54:24.679205 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480e01af-fe53-4341-9987-b53552c7b77f" containerName="kube-state-metrics"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679213 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="480e01af-fe53-4341-9987-b53552c7b77f" containerName="kube-state-metrics"
Mar 14 05:54:24 crc kubenswrapper[4817]: E0314 05:54:24.679225 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="ceilometer-notification-agent"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679231 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="ceilometer-notification-agent"
Mar 14 05:54:24 crc kubenswrapper[4817]: E0314 05:54:24.679243 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679250 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679705 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="proxy-httpd"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679810 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="ceilometer-central-agent"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679821 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3277bee-fe8a-4fa1-a005-cc16f69665b7" containerName="oc"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679828 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="480e01af-fe53-4341-9987-b53552c7b77f" containerName="kube-state-metrics"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679837 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="ceilometer-notification-agent"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679846 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" containerName="sg-core"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679880 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.679921 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api-log"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.680971 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.683767 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.685210 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.692231 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbst8\" (UniqueName: \"kubernetes.io/projected/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-api-access-sbst8\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.692295 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.692909 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.692994 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.693082 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.693102 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccee9da-6ff0-4b19-a97b-8fc7071782d6-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.696924 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.726798 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-config-data" (OuterVolumeSpecName: "config-data") pod "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" (UID: "c3d2f3d0-d087-4aa2-86bb-aa46f576f23e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.798239 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbst8\" (UniqueName: \"kubernetes.io/projected/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-api-access-sbst8\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.798620 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.798762 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.798977 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.813415 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480e01af-fe53-4341-9987-b53552c7b77f" path="/var/lib/kubelet/pods/480e01af-fe53-4341-9987-b53552c7b77f/volumes"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.814072 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.842479 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbst8\" (UniqueName: \"kubernetes.io/projected/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-api-access-sbst8\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.845944 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.846426 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-549548697-x46rl"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.867270 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.868245 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/87a2e6ce-41f8-473e-ba22-d038bbef1de2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"87a2e6ce-41f8-473e-ba22-d038bbef1de2\") " pod="openstack/kube-state-metrics-0"
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.928626 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cffdb9bc8-nfp5r"]
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.929672 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cffdb9bc8-nfp5r" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerName="neutron-httpd" containerID="cri-o://2ccd3ec57d75d11d5b9b809695c8cad452f5db007222cf9faad62d5f4b827fdc" gracePeriod=30
Mar 14 05:54:24 crc kubenswrapper[4817]: I0314 05:54:24.937979 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cffdb9bc8-nfp5r" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerName="neutron-api" containerID="cri-o://6813fa994a03d06f9c90e9d2976f859fb8358f6e42e61538c91037d211e9b2f3" gracePeriod=30
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.019323 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.286261 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3d2f3d0-d087-4aa2-86bb-aa46f576f23e","Type":"ContainerDied","Data":"4b50af39f620b7107ab32687e3aef828417ddea6ddfd614ce859e566a4989ef2"}
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.286725 4817 scope.go:117] "RemoveContainer" containerID="f9baa29b6ec9a9ef2b8912f51bde426314d5e03ab2aef9eb2db463f1f38255a3"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.286655 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.294273 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ccee9da-6ff0-4b19-a97b-8fc7071782d6","Type":"ContainerDied","Data":"0bf4fd3cd1d0bcdb4eb1b112bd9c732a3e38c3017bad964b9cd52aa748a01e1f"}
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.294333 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.352982 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.369944 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.386616 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.414054 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.429996 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.432568 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.437449 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.437476 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.437647 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.449937 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.466973 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.469279 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.477727 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.478205 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.478314 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.481072 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.508992 4817 scope.go:117] "RemoveContainer" containerID="1aea90580413b4215ff12e0314af7a1894f46dffc6ad184b3159f677a11526e2"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529492 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-run-httpd\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529553 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-config-data\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529596 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529626 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/937e0c39-f135-482a-b4f8-388fbd9a11bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529679 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-scripts\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529715 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529775 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529820 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937e0c39-f135-482a-b4f8-388fbd9a11bd-logs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0"
Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529852 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-log-httpd\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529880 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529922 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kn76\" (UniqueName: \"kubernetes.io/projected/26c2a04c-15c9-4ad3-8e16-274ae01138a1-kube-api-access-8kn76\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529956 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pslm4\" (UniqueName: \"kubernetes.io/projected/937e0c39-f135-482a-b4f8-388fbd9a11bd-kube-api-access-pslm4\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.529988 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-scripts\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.530142 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-config-data\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.530265 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.530331 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.530390 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.539597 4817 scope.go:117] "RemoveContainer" containerID="bb8fb08a199b85e5cf72c70d43f78c977e906fdb2a5ccef9c6d8c7ee45eb6d5c" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.567235 4817 scope.go:117] "RemoveContainer" containerID="f5d8a669e8609ac990c4e24ffe65c714c013ce07fb450a5d9e827508fc1da827" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.584037 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.599466 4817 scope.go:117] "RemoveContainer" 
containerID="1729035a49e67beae40684181ace5f2f5a3c64facc3b297b3420aef969393d55" Mar 14 05:54:25 crc kubenswrapper[4817]: W0314 05:54:25.607298 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87a2e6ce_41f8_473e_ba22_d038bbef1de2.slice/crio-1131f67ede767a680e648b820e3134cdd60ce01bd935dcd33e9280f46a2e3a25 WatchSource:0}: Error finding container 1131f67ede767a680e648b820e3134cdd60ce01bd935dcd33e9280f46a2e3a25: Status 404 returned error can't find the container with id 1131f67ede767a680e648b820e3134cdd60ce01bd935dcd33e9280f46a2e3a25 Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.621994 4817 scope.go:117] "RemoveContainer" containerID="472e0330ca44fea6162e9c93eaf714e74ace4febd3dd1f4a4c4a6848b166d1d8" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632203 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937e0c39-f135-482a-b4f8-388fbd9a11bd-logs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632249 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-log-httpd\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632272 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632301 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8kn76\" (UniqueName: \"kubernetes.io/projected/26c2a04c-15c9-4ad3-8e16-274ae01138a1-kube-api-access-8kn76\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632334 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pslm4\" (UniqueName: \"kubernetes.io/projected/937e0c39-f135-482a-b4f8-388fbd9a11bd-kube-api-access-pslm4\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632360 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-scripts\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632379 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-config-data\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632399 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632417 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " 
pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632440 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632464 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-run-httpd\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632482 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-config-data\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632507 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632527 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/937e0c39-f135-482a-b4f8-388fbd9a11bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632560 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-scripts\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632588 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.632635 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.633969 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937e0c39-f135-482a-b4f8-388fbd9a11bd-logs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.634641 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-run-httpd\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.635019 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-log-httpd\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.635602 4817 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/937e0c39-f135-482a-b4f8-388fbd9a11bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.645100 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.647048 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-scripts\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.647358 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.647440 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.647955 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 
14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.648143 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-config-data\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.648792 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.649279 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.649535 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937e0c39-f135-482a-b4f8-388fbd9a11bd-config-data\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.652042 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-scripts\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.653295 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.654600 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kn76\" (UniqueName: \"kubernetes.io/projected/26c2a04c-15c9-4ad3-8e16-274ae01138a1-kube-api-access-8kn76\") pod \"ceilometer-0\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.659766 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pslm4\" (UniqueName: \"kubernetes.io/projected/937e0c39-f135-482a-b4f8-388fbd9a11bd-kube-api-access-pslm4\") pod \"cinder-api-0\" (UID: \"937e0c39-f135-482a-b4f8-388fbd9a11bd\") " pod="openstack/cinder-api-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.769874 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:54:25 crc kubenswrapper[4817]: I0314 05:54:25.803390 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 14 05:54:26 crc kubenswrapper[4817]: I0314 05:54:26.332449 4817 generic.go:334] "Generic (PLEG): container finished" podID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerID="2ccd3ec57d75d11d5b9b809695c8cad452f5db007222cf9faad62d5f4b827fdc" exitCode=0 Mar 14 05:54:26 crc kubenswrapper[4817]: I0314 05:54:26.332912 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cffdb9bc8-nfp5r" event={"ID":"8d851269-5dc1-486c-9914-f8de2088dfc5","Type":"ContainerDied","Data":"2ccd3ec57d75d11d5b9b809695c8cad452f5db007222cf9faad62d5f4b827fdc"} Mar 14 05:54:26 crc kubenswrapper[4817]: I0314 05:54:26.334772 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"87a2e6ce-41f8-473e-ba22-d038bbef1de2","Type":"ContainerStarted","Data":"1131f67ede767a680e648b820e3134cdd60ce01bd935dcd33e9280f46a2e3a25"} Mar 14 05:54:26 crc kubenswrapper[4817]: I0314 05:54:26.396537 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 14 05:54:26 crc kubenswrapper[4817]: W0314 05:54:26.429494 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod937e0c39_f135_482a_b4f8_388fbd9a11bd.slice/crio-a483c2cf991f01b511c13f601b27b58b8af872eb0ecd192fcb181d02ac54f201 WatchSource:0}: Error finding container a483c2cf991f01b511c13f601b27b58b8af872eb0ecd192fcb181d02ac54f201: Status 404 returned error can't find the container with id a483c2cf991f01b511c13f601b27b58b8af872eb0ecd192fcb181d02ac54f201 Mar 14 05:54:26 crc kubenswrapper[4817]: I0314 05:54:26.507630 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:54:27 crc kubenswrapper[4817]: I0314 05:54:27.099608 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" 
path="/var/lib/kubelet/pods/5ccee9da-6ff0-4b19-a97b-8fc7071782d6/volumes" Mar 14 05:54:27 crc kubenswrapper[4817]: I0314 05:54:27.101095 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d2f3d0-d087-4aa2-86bb-aa46f576f23e" path="/var/lib/kubelet/pods/c3d2f3d0-d087-4aa2-86bb-aa46f576f23e/volumes" Mar 14 05:54:27 crc kubenswrapper[4817]: I0314 05:54:27.348828 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerStarted","Data":"58eaef97409b6376e9d93f6b577daa20e6f94e2f0f2cb91d66e7bb270d715ac2"} Mar 14 05:54:27 crc kubenswrapper[4817]: I0314 05:54:27.355634 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"30bca2b3-da0c-4f63-b9fb-95c742af358e","Type":"ContainerStarted","Data":"24e34df6a948f276a58f92c56081db0e8df6452421bda787776ba0bf3a4a94e6"} Mar 14 05:54:27 crc kubenswrapper[4817]: I0314 05:54:27.358279 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"937e0c39-f135-482a-b4f8-388fbd9a11bd","Type":"ContainerStarted","Data":"a483c2cf991f01b511c13f601b27b58b8af872eb0ecd192fcb181d02ac54f201"} Mar 14 05:54:27 crc kubenswrapper[4817]: I0314 05:54:27.379735 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.409231871 podStartE2EDuration="27.379715727s" podCreationTimestamp="2026-03-14 05:54:00 +0000 UTC" firstStartedPulling="2026-03-14 05:54:01.907200081 +0000 UTC m=+1295.945460827" lastFinishedPulling="2026-03-14 05:54:23.877683937 +0000 UTC m=+1317.915944683" observedRunningTime="2026-03-14 05:54:27.373729865 +0000 UTC m=+1321.411990611" watchObservedRunningTime="2026-03-14 05:54:27.379715727 +0000 UTC m=+1321.417976473" Mar 14 05:54:27 crc kubenswrapper[4817]: I0314 05:54:27.788144 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 
05:54:28 crc kubenswrapper[4817]: I0314 05:54:28.490109 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"937e0c39-f135-482a-b4f8-388fbd9a11bd","Type":"ContainerStarted","Data":"21ec932bb36aa3af1ab9b9de4c95844ee3e5c40ed27c2326f24f24142aa6551c"} Mar 14 05:54:28 crc kubenswrapper[4817]: I0314 05:54:28.494380 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"87a2e6ce-41f8-473e-ba22-d038bbef1de2","Type":"ContainerStarted","Data":"8698b50a22e18297f79372268cc8bc10e3ecee57abef0212355c7ace7b32e6eb"} Mar 14 05:54:28 crc kubenswrapper[4817]: I0314 05:54:28.494469 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 14 05:54:29 crc kubenswrapper[4817]: I0314 05:54:29.130044 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="5ccee9da-6ff0-4b19-a97b-8fc7071782d6" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.155:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.529084 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerStarted","Data":"425a93adf2cae2f5fc95e4b294a89dbc45c80f913c7abeba1bd462c75b9c81b1"} Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.532843 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"937e0c39-f135-482a-b4f8-388fbd9a11bd","Type":"ContainerStarted","Data":"6fc74de5c27a1542e8e85961f704bc4418d9a586fc4d96466c665be2ee2be96e"} Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.533333 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.562766 4817 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.5627445049999995 podStartE2EDuration="5.562744505s" podCreationTimestamp="2026-03-14 05:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:54:30.55979506 +0000 UTC m=+1324.598055806" watchObservedRunningTime="2026-03-14 05:54:30.562744505 +0000 UTC m=+1324.601005251" Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.565365 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.609968753 podStartE2EDuration="6.56535687s" podCreationTimestamp="2026-03-14 05:54:24 +0000 UTC" firstStartedPulling="2026-03-14 05:54:25.622087171 +0000 UTC m=+1319.660347927" lastFinishedPulling="2026-03-14 05:54:26.577475298 +0000 UTC m=+1320.615736044" observedRunningTime="2026-03-14 05:54:28.515952966 +0000 UTC m=+1322.554213712" watchObservedRunningTime="2026-03-14 05:54:30.56535687 +0000 UTC m=+1324.603617616" Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.869505 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mtn8w"] Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.871403 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mtn8w" Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.902272 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mtn8w"] Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.947027 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mmkrb"] Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.948448 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:30 crc kubenswrapper[4817]: I0314 05:54:30.976191 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mmkrb"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.044309 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b608e883-8038-424c-aa1a-d5a7b23ba0bf-operator-scripts\") pod \"nova-api-db-create-mtn8w\" (UID: \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\") " pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.044378 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d12d202-daac-4eb2-a09a-5c3a63251b85-operator-scripts\") pod \"nova-cell0-db-create-mmkrb\" (UID: \"3d12d202-daac-4eb2-a09a-5c3a63251b85\") " pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.044427 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9mzz\" (UniqueName: \"kubernetes.io/projected/3d12d202-daac-4eb2-a09a-5c3a63251b85-kube-api-access-r9mzz\") pod \"nova-cell0-db-create-mmkrb\" (UID: \"3d12d202-daac-4eb2-a09a-5c3a63251b85\") " pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.044489 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56cz\" (UniqueName: \"kubernetes.io/projected/b608e883-8038-424c-aa1a-d5a7b23ba0bf-kube-api-access-m56cz\") pod \"nova-api-db-create-mtn8w\" (UID: \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\") " pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.083697 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1171-account-create-update-fgfss"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.085250 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-c4bd4"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.086072 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.086623 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.097310 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1171-account-create-update-fgfss"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.097390 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c4bd4"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.290781 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56cz\" (UniqueName: \"kubernetes.io/projected/b608e883-8038-424c-aa1a-d5a7b23ba0bf-kube-api-access-m56cz\") pod \"nova-api-db-create-mtn8w\" (UID: \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\") " pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.290880 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzg5\" (UniqueName: \"kubernetes.io/projected/689fbc67-9915-470c-a459-7f1787c26534-kube-api-access-8kzg5\") pod \"nova-api-1171-account-create-update-fgfss\" (UID: \"689fbc67-9915-470c-a459-7f1787c26534\") " pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.290964 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b608e883-8038-424c-aa1a-d5a7b23ba0bf-operator-scripts\") pod \"nova-api-db-create-mtn8w\" (UID: \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\") " pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.291001 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d12d202-daac-4eb2-a09a-5c3a63251b85-operator-scripts\") pod \"nova-cell0-db-create-mmkrb\" (UID: \"3d12d202-daac-4eb2-a09a-5c3a63251b85\") " pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.291057 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9mzz\" (UniqueName: \"kubernetes.io/projected/3d12d202-daac-4eb2-a09a-5c3a63251b85-kube-api-access-r9mzz\") pod \"nova-cell0-db-create-mmkrb\" (UID: \"3d12d202-daac-4eb2-a09a-5c3a63251b85\") " pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.291085 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689fbc67-9915-470c-a459-7f1787c26534-operator-scripts\") pod \"nova-api-1171-account-create-update-fgfss\" (UID: \"689fbc67-9915-470c-a459-7f1787c26534\") " pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.293533 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b608e883-8038-424c-aa1a-d5a7b23ba0bf-operator-scripts\") pod \"nova-api-db-create-mtn8w\" (UID: \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\") " pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.298098 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d12d202-daac-4eb2-a09a-5c3a63251b85-operator-scripts\") pod \"nova-cell0-db-create-mmkrb\" (UID: \"3d12d202-daac-4eb2-a09a-5c3a63251b85\") " pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.368414 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.389032 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56cz\" (UniqueName: \"kubernetes.io/projected/b608e883-8038-424c-aa1a-d5a7b23ba0bf-kube-api-access-m56cz\") pod \"nova-api-db-create-mtn8w\" (UID: \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\") " pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.393529 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksrl8\" (UniqueName: \"kubernetes.io/projected/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-kube-api-access-ksrl8\") pod \"nova-cell1-db-create-c4bd4\" (UID: \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\") " pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.393607 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-operator-scripts\") pod \"nova-cell1-db-create-c4bd4\" (UID: \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\") " pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.393643 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kzg5\" (UniqueName: \"kubernetes.io/projected/689fbc67-9915-470c-a459-7f1787c26534-kube-api-access-8kzg5\") pod \"nova-api-1171-account-create-update-fgfss\" (UID: \"689fbc67-9915-470c-a459-7f1787c26534\") " pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.393762 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689fbc67-9915-470c-a459-7f1787c26534-operator-scripts\") pod \"nova-api-1171-account-create-update-fgfss\" (UID: \"689fbc67-9915-470c-a459-7f1787c26534\") " pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.394695 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689fbc67-9915-470c-a459-7f1787c26534-operator-scripts\") pod \"nova-api-1171-account-create-update-fgfss\" (UID: \"689fbc67-9915-470c-a459-7f1787c26534\") " pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.427591 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9mzz\" (UniqueName: \"kubernetes.io/projected/3d12d202-daac-4eb2-a09a-5c3a63251b85-kube-api-access-r9mzz\") pod \"nova-cell0-db-create-mmkrb\" (UID: \"3d12d202-daac-4eb2-a09a-5c3a63251b85\") " pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.433668 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kzg5\" (UniqueName: \"kubernetes.io/projected/689fbc67-9915-470c-a459-7f1787c26534-kube-api-access-8kzg5\") pod \"nova-api-1171-account-create-update-fgfss\" (UID: \"689fbc67-9915-470c-a459-7f1787c26534\") " pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.496166 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksrl8\" (UniqueName: \"kubernetes.io/projected/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-kube-api-access-ksrl8\") pod \"nova-cell1-db-create-c4bd4\" (UID: \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\") " pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.496283 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-operator-scripts\") pod \"nova-cell1-db-create-c4bd4\" (UID: \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\") " pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.497560 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-operator-scripts\") pod \"nova-cell1-db-create-c4bd4\" (UID: \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\") " pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.508751 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.515755 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksrl8\" (UniqueName: \"kubernetes.io/projected/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-kube-api-access-ksrl8\") pod \"nova-cell1-db-create-c4bd4\" (UID: \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\") " pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.567731 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.615121 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1e84-account-create-update-nr95f"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.619552 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1e84-account-create-update-nr95f"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.621856 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.635137 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1e84-account-create-update-nr95f"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.636661 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.653720 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.701538 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhtwp\" (UniqueName: \"kubernetes.io/projected/7a140f8f-9be0-40ee-a952-2b2ef0d67031-kube-api-access-lhtwp\") pod \"nova-cell0-1e84-account-create-update-nr95f\" (UID: \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\") " pod="openstack/nova-cell0-1e84-account-create-update-nr95f"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.701629 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a140f8f-9be0-40ee-a952-2b2ef0d67031-operator-scripts\") pod \"nova-cell0-1e84-account-create-update-nr95f\" (UID: \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\") " pod="openstack/nova-cell0-1e84-account-create-update-nr95f"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.784347 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-efad-account-create-update-mccw2"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.787014 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-efad-account-create-update-mccw2"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.789977 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.814624 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhtwp\" (UniqueName: \"kubernetes.io/projected/7a140f8f-9be0-40ee-a952-2b2ef0d67031-kube-api-access-lhtwp\") pod \"nova-cell0-1e84-account-create-update-nr95f\" (UID: \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\") " pod="openstack/nova-cell0-1e84-account-create-update-nr95f"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.814673 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgsmq\" (UniqueName: \"kubernetes.io/projected/fdf4c48c-91c4-4d85-938e-ec70263d466e-kube-api-access-rgsmq\") pod \"nova-cell1-efad-account-create-update-mccw2\" (UID: \"fdf4c48c-91c4-4d85-938e-ec70263d466e\") " pod="openstack/nova-cell1-efad-account-create-update-mccw2"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.814746 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a140f8f-9be0-40ee-a952-2b2ef0d67031-operator-scripts\") pod \"nova-cell0-1e84-account-create-update-nr95f\" (UID: \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\") " pod="openstack/nova-cell0-1e84-account-create-update-nr95f"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.814850 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf4c48c-91c4-4d85-938e-ec70263d466e-operator-scripts\") pod \"nova-cell1-efad-account-create-update-mccw2\" (UID: \"fdf4c48c-91c4-4d85-938e-ec70263d466e\") " pod="openstack/nova-cell1-efad-account-create-update-mccw2"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.820762 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a140f8f-9be0-40ee-a952-2b2ef0d67031-operator-scripts\") pod \"nova-cell0-1e84-account-create-update-nr95f\" (UID: \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\") " pod="openstack/nova-cell0-1e84-account-create-update-nr95f"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.824769 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-efad-account-create-update-mccw2"]
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.858421 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhtwp\" (UniqueName: \"kubernetes.io/projected/7a140f8f-9be0-40ee-a952-2b2ef0d67031-kube-api-access-lhtwp\") pod \"nova-cell0-1e84-account-create-update-nr95f\" (UID: \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\") " pod="openstack/nova-cell0-1e84-account-create-update-nr95f"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.920349 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgsmq\" (UniqueName: \"kubernetes.io/projected/fdf4c48c-91c4-4d85-938e-ec70263d466e-kube-api-access-rgsmq\") pod \"nova-cell1-efad-account-create-update-mccw2\" (UID: \"fdf4c48c-91c4-4d85-938e-ec70263d466e\") " pod="openstack/nova-cell1-efad-account-create-update-mccw2"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.920529 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf4c48c-91c4-4d85-938e-ec70263d466e-operator-scripts\") pod \"nova-cell1-efad-account-create-update-mccw2\" (UID: \"fdf4c48c-91c4-4d85-938e-ec70263d466e\") " pod="openstack/nova-cell1-efad-account-create-update-mccw2"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.921496 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf4c48c-91c4-4d85-938e-ec70263d466e-operator-scripts\") pod \"nova-cell1-efad-account-create-update-mccw2\" (UID: \"fdf4c48c-91c4-4d85-938e-ec70263d466e\") " pod="openstack/nova-cell1-efad-account-create-update-mccw2"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.940707 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgsmq\" (UniqueName: \"kubernetes.io/projected/fdf4c48c-91c4-4d85-938e-ec70263d466e-kube-api-access-rgsmq\") pod \"nova-cell1-efad-account-create-update-mccw2\" (UID: \"fdf4c48c-91c4-4d85-938e-ec70263d466e\") " pod="openstack/nova-cell1-efad-account-create-update-mccw2"
Mar 14 05:54:31 crc kubenswrapper[4817]: I0314 05:54:31.966577 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1e84-account-create-update-nr95f"
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.127496 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-efad-account-create-update-mccw2"
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.208265 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mtn8w"]
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.223978 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mmkrb"]
Mar 14 05:54:32 crc kubenswrapper[4817]: W0314 05:54:32.260205 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb608e883_8038_424c_aa1a_d5a7b23ba0bf.slice/crio-9df30ffddc41be9f5a3db49dfd10b4cf517ce9ad681db31d3cf56d2138ab63df WatchSource:0}: Error finding container 9df30ffddc41be9f5a3db49dfd10b4cf517ce9ad681db31d3cf56d2138ab63df: Status 404 returned error can't find the container with id 9df30ffddc41be9f5a3db49dfd10b4cf517ce9ad681db31d3cf56d2138ab63df
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.738975 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c4bd4"]
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.890695 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mtn8w" event={"ID":"b608e883-8038-424c-aa1a-d5a7b23ba0bf","Type":"ContainerStarted","Data":"9df30ffddc41be9f5a3db49dfd10b4cf517ce9ad681db31d3cf56d2138ab63df"}
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.890743 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1171-account-create-update-fgfss"]
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.890765 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerStarted","Data":"010d73566370393b892871b0356bfa788b2f7707f26d2ed49d4ad15c7df068c0"}
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.890782 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mmkrb" event={"ID":"3d12d202-daac-4eb2-a09a-5c3a63251b85","Type":"ContainerStarted","Data":"d87228309c4b18a198f67770de3a8b574e8c755056a27e04b569eeef9ceeab5e"}
Mar 14 05:54:32 crc kubenswrapper[4817]: I0314 05:54:32.996272 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1e84-account-create-update-nr95f"]
Mar 14 05:54:33 crc kubenswrapper[4817]: W0314 05:54:33.023604 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a140f8f_9be0_40ee_a952_2b2ef0d67031.slice/crio-cd90b22ef7e7be117b0277560fdd9d01f1b50ebd916e69a4b3b6afbd6b66e96f WatchSource:0}: Error finding container cd90b22ef7e7be117b0277560fdd9d01f1b50ebd916e69a4b3b6afbd6b66e96f: Status 404 returned error can't find the container with id cd90b22ef7e7be117b0277560fdd9d01f1b50ebd916e69a4b3b6afbd6b66e96f
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.058469 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-efad-account-create-update-mccw2"]
Mar 14 05:54:33 crc kubenswrapper[4817]: W0314 05:54:33.078791 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdf4c48c_91c4_4d85_938e_ec70263d466e.slice/crio-949f21e3287b130c37e71f718b520bb43bbfb8ac06f699fe30ae30aa95a6bad4 WatchSource:0}: Error finding container 949f21e3287b130c37e71f718b520bb43bbfb8ac06f699fe30ae30aa95a6bad4: Status 404 returned error can't find the container with id 949f21e3287b130c37e71f718b520bb43bbfb8ac06f699fe30ae30aa95a6bad4
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.868982 4817 generic.go:334] "Generic (PLEG): container finished" podID="689fbc67-9915-470c-a459-7f1787c26534" containerID="3c059d530572839117be6544d40119b5f20665db829591c8224714c628577580" exitCode=0
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.869096 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1171-account-create-update-fgfss" event={"ID":"689fbc67-9915-470c-a459-7f1787c26534","Type":"ContainerDied","Data":"3c059d530572839117be6544d40119b5f20665db829591c8224714c628577580"}
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.869460 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1171-account-create-update-fgfss" event={"ID":"689fbc67-9915-470c-a459-7f1787c26534","Type":"ContainerStarted","Data":"9676f01d9e9c0ecb7242fb4c744dd55ca4debde4b40d7bcddfbb0004ec4ce531"}
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.872674 4817 generic.go:334] "Generic (PLEG): container finished" podID="3d12d202-daac-4eb2-a09a-5c3a63251b85" containerID="d509e85524eb80838eb94f1c5cdc7dd9266adda5cda277f61528a27d934195d5" exitCode=0
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.872738 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mmkrb" event={"ID":"3d12d202-daac-4eb2-a09a-5c3a63251b85","Type":"ContainerDied","Data":"d509e85524eb80838eb94f1c5cdc7dd9266adda5cda277f61528a27d934195d5"}
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.874849 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-efad-account-create-update-mccw2" event={"ID":"fdf4c48c-91c4-4d85-938e-ec70263d466e","Type":"ContainerStarted","Data":"949f21e3287b130c37e71f718b520bb43bbfb8ac06f699fe30ae30aa95a6bad4"}
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.878127 4817 generic.go:334] "Generic (PLEG): container finished" podID="b608e883-8038-424c-aa1a-d5a7b23ba0bf" containerID="e7190b27429feb1422fb9099e30e1e455c9f1050911818f5548bc155ed7ea5e1" exitCode=0
Mar 14 05:54:33 crc kubenswrapper[4817]: I0314 05:54:33.878328 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mtn8w" event={"ID":"b608e883-8038-424c-aa1a-d5a7b23ba0bf","Type":"ContainerDied","Data":"e7190b27429feb1422fb9099e30e1e455c9f1050911818f5548bc155ed7ea5e1"}
Mar 14 05:54:34 crc kubenswrapper[4817]: I0314 05:54:34.191358 4817 generic.go:334] "Generic (PLEG): container finished" podID="fc6cb376-b4a6-449c-9155-4da3eaa5a94b" containerID="ebcf52beebdc4d6f7fa5d7647ece670753dd5613bb28788ff24cbfa0d42cedfb" exitCode=0
Mar 14 05:54:34 crc kubenswrapper[4817]: I0314 05:54:34.191459 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c4bd4" event={"ID":"fc6cb376-b4a6-449c-9155-4da3eaa5a94b","Type":"ContainerDied","Data":"ebcf52beebdc4d6f7fa5d7647ece670753dd5613bb28788ff24cbfa0d42cedfb"}
Mar 14 05:54:34 crc kubenswrapper[4817]: I0314 05:54:34.191492 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c4bd4" event={"ID":"fc6cb376-b4a6-449c-9155-4da3eaa5a94b","Type":"ContainerStarted","Data":"9fd23b944fb73d82e8f0d3119af13cd1d20d44a8b49db1f3a2dba0101995cabf"}
Mar 14 05:54:34 crc kubenswrapper[4817]: I0314 05:54:34.202225 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1e84-account-create-update-nr95f" event={"ID":"7a140f8f-9be0-40ee-a952-2b2ef0d67031","Type":"ContainerStarted","Data":"cd90b22ef7e7be117b0277560fdd9d01f1b50ebd916e69a4b3b6afbd6b66e96f"}
Mar 14 05:54:35 crc kubenswrapper[4817]: I0314 05:54:35.037849 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Mar 14 05:54:35 crc kubenswrapper[4817]: I0314 05:54:35.222569 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerStarted","Data":"47000556cc52c769b82f6c6a73d111ac8f7ccc4ef6100a1659a2ee7e661e6b72"}
Mar 14 05:54:35 crc kubenswrapper[4817]: I0314 05:54:35.234854 4817 generic.go:334] "Generic (PLEG): container finished" podID="fdf4c48c-91c4-4d85-938e-ec70263d466e" containerID="d5ddaa27474aba842f32ff6cb098dcd2eef54bd56a290f34ea7c4736cc8d786e" exitCode=0
Mar 14 05:54:35 crc kubenswrapper[4817]: I0314 05:54:35.235027 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-efad-account-create-update-mccw2" event={"ID":"fdf4c48c-91c4-4d85-938e-ec70263d466e","Type":"ContainerDied","Data":"d5ddaa27474aba842f32ff6cb098dcd2eef54bd56a290f34ea7c4736cc8d786e"}
Mar 14 05:54:35 crc kubenswrapper[4817]: I0314 05:54:35.238153 4817 generic.go:334] "Generic (PLEG): container finished" podID="7a140f8f-9be0-40ee-a952-2b2ef0d67031" containerID="bc80be5572b8024d32599245106517f8839b626aba33d9745d3e3d0fe0f9030e" exitCode=0
Mar 14 05:54:35 crc kubenswrapper[4817]: I0314 05:54:35.238287 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1e84-account-create-update-nr95f" event={"ID":"7a140f8f-9be0-40ee-a952-2b2ef0d67031","Type":"ContainerDied","Data":"bc80be5572b8024d32599245106517f8839b626aba33d9745d3e3d0fe0f9030e"}
Mar 14 05:54:35 crc kubenswrapper[4817]: I0314 05:54:35.901022 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.043863 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d12d202-daac-4eb2-a09a-5c3a63251b85-operator-scripts\") pod \"3d12d202-daac-4eb2-a09a-5c3a63251b85\" (UID: \"3d12d202-daac-4eb2-a09a-5c3a63251b85\") "
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.044045 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9mzz\" (UniqueName: \"kubernetes.io/projected/3d12d202-daac-4eb2-a09a-5c3a63251b85-kube-api-access-r9mzz\") pod \"3d12d202-daac-4eb2-a09a-5c3a63251b85\" (UID: \"3d12d202-daac-4eb2-a09a-5c3a63251b85\") "
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.047154 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d12d202-daac-4eb2-a09a-5c3a63251b85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d12d202-daac-4eb2-a09a-5c3a63251b85" (UID: "3d12d202-daac-4eb2-a09a-5c3a63251b85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.063964 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d12d202-daac-4eb2-a09a-5c3a63251b85-kube-api-access-r9mzz" (OuterVolumeSpecName: "kube-api-access-r9mzz") pod "3d12d202-daac-4eb2-a09a-5c3a63251b85" (UID: "3d12d202-daac-4eb2-a09a-5c3a63251b85"). InnerVolumeSpecName "kube-api-access-r9mzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.069430 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.117520 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.150435 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d12d202-daac-4eb2-a09a-5c3a63251b85-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.150486 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9mzz\" (UniqueName: \"kubernetes.io/projected/3d12d202-daac-4eb2-a09a-5c3a63251b85-kube-api-access-r9mzz\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.151040 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.252555 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56cz\" (UniqueName: \"kubernetes.io/projected/b608e883-8038-424c-aa1a-d5a7b23ba0bf-kube-api-access-m56cz\") pod \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\" (UID: \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\") "
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.252612 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b608e883-8038-424c-aa1a-d5a7b23ba0bf-operator-scripts\") pod \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\" (UID: \"b608e883-8038-424c-aa1a-d5a7b23ba0bf\") "
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.252953 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kzg5\" (UniqueName: \"kubernetes.io/projected/689fbc67-9915-470c-a459-7f1787c26534-kube-api-access-8kzg5\") pod \"689fbc67-9915-470c-a459-7f1787c26534\" (UID: \"689fbc67-9915-470c-a459-7f1787c26534\") "
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.252984 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-operator-scripts\") pod \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\" (UID: \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\") "
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.253114 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksrl8\" (UniqueName: \"kubernetes.io/projected/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-kube-api-access-ksrl8\") pod \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\" (UID: \"fc6cb376-b4a6-449c-9155-4da3eaa5a94b\") "
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.253197 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689fbc67-9915-470c-a459-7f1787c26534-operator-scripts\") pod \"689fbc67-9915-470c-a459-7f1787c26534\" (UID: \"689fbc67-9915-470c-a459-7f1787c26534\") "
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.254040 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/689fbc67-9915-470c-a459-7f1787c26534-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "689fbc67-9915-470c-a459-7f1787c26534" (UID: "689fbc67-9915-470c-a459-7f1787c26534"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.256348 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b608e883-8038-424c-aa1a-d5a7b23ba0bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b608e883-8038-424c-aa1a-d5a7b23ba0bf" (UID: "b608e883-8038-424c-aa1a-d5a7b23ba0bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.257255 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc6cb376-b4a6-449c-9155-4da3eaa5a94b" (UID: "fc6cb376-b4a6-449c-9155-4da3eaa5a94b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.260356 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b608e883-8038-424c-aa1a-d5a7b23ba0bf-kube-api-access-m56cz" (OuterVolumeSpecName: "kube-api-access-m56cz") pod "b608e883-8038-424c-aa1a-d5a7b23ba0bf" (UID: "b608e883-8038-424c-aa1a-d5a7b23ba0bf"). InnerVolumeSpecName "kube-api-access-m56cz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.282258 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/689fbc67-9915-470c-a459-7f1787c26534-kube-api-access-8kzg5" (OuterVolumeSpecName: "kube-api-access-8kzg5") pod "689fbc67-9915-470c-a459-7f1787c26534" (UID: "689fbc67-9915-470c-a459-7f1787c26534"). InnerVolumeSpecName "kube-api-access-8kzg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.285199 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-kube-api-access-ksrl8" (OuterVolumeSpecName: "kube-api-access-ksrl8") pod "fc6cb376-b4a6-449c-9155-4da3eaa5a94b" (UID: "fc6cb376-b4a6-449c-9155-4da3eaa5a94b"). InnerVolumeSpecName "kube-api-access-ksrl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.293123 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mmkrb"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.295010 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mmkrb" event={"ID":"3d12d202-daac-4eb2-a09a-5c3a63251b85","Type":"ContainerDied","Data":"d87228309c4b18a198f67770de3a8b574e8c755056a27e04b569eeef9ceeab5e"}
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.295062 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d87228309c4b18a198f67770de3a8b574e8c755056a27e04b569eeef9ceeab5e"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.298656 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mtn8w"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.300360 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mtn8w" event={"ID":"b608e883-8038-424c-aa1a-d5a7b23ba0bf","Type":"ContainerDied","Data":"9df30ffddc41be9f5a3db49dfd10b4cf517ce9ad681db31d3cf56d2138ab63df"}
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.300407 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df30ffddc41be9f5a3db49dfd10b4cf517ce9ad681db31d3cf56d2138ab63df"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.307115 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c4bd4" event={"ID":"fc6cb376-b4a6-449c-9155-4da3eaa5a94b","Type":"ContainerDied","Data":"9fd23b944fb73d82e8f0d3119af13cd1d20d44a8b49db1f3a2dba0101995cabf"}
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.307165 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd23b944fb73d82e8f0d3119af13cd1d20d44a8b49db1f3a2dba0101995cabf"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.307178 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c4bd4"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.311780 4817 generic.go:334] "Generic (PLEG): container finished" podID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerID="6813fa994a03d06f9c90e9d2976f859fb8358f6e42e61538c91037d211e9b2f3" exitCode=0
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.311928 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cffdb9bc8-nfp5r" event={"ID":"8d851269-5dc1-486c-9914-f8de2088dfc5","Type":"ContainerDied","Data":"6813fa994a03d06f9c90e9d2976f859fb8358f6e42e61538c91037d211e9b2f3"}
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.320606 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1171-account-create-update-fgfss"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.321590 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1171-account-create-update-fgfss" event={"ID":"689fbc67-9915-470c-a459-7f1787c26534","Type":"ContainerDied","Data":"9676f01d9e9c0ecb7242fb4c744dd55ca4debde4b40d7bcddfbb0004ec4ce531"}
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.321674 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9676f01d9e9c0ecb7242fb4c744dd55ca4debde4b40d7bcddfbb0004ec4ce531"
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.356686 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689fbc67-9915-470c-a459-7f1787c26534-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.356782 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m56cz\" (UniqueName: \"kubernetes.io/projected/b608e883-8038-424c-aa1a-d5a7b23ba0bf-kube-api-access-m56cz\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.356794 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b608e883-8038-424c-aa1a-d5a7b23ba0bf-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.356805 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kzg5\" (UniqueName: \"kubernetes.io/projected/689fbc67-9915-470c-a459-7f1787c26534-kube-api-access-8kzg5\") on node \"crc\" DevicePath \"\""
Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.356814 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-operator-scripts\") on node \"crc\" DevicePath
\"\"" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.356826 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksrl8\" (UniqueName: \"kubernetes.io/projected/fc6cb376-b4a6-449c-9155-4da3eaa5a94b-kube-api-access-ksrl8\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.395837 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.565966 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-config\") pod \"8d851269-5dc1-486c-9914-f8de2088dfc5\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.566162 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-httpd-config\") pod \"8d851269-5dc1-486c-9914-f8de2088dfc5\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.566225 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-ovndb-tls-certs\") pod \"8d851269-5dc1-486c-9914-f8de2088dfc5\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.566311 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c4ww\" (UniqueName: \"kubernetes.io/projected/8d851269-5dc1-486c-9914-f8de2088dfc5-kube-api-access-4c4ww\") pod \"8d851269-5dc1-486c-9914-f8de2088dfc5\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.566366 4817 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-combined-ca-bundle\") pod \"8d851269-5dc1-486c-9914-f8de2088dfc5\" (UID: \"8d851269-5dc1-486c-9914-f8de2088dfc5\") " Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.571503 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8d851269-5dc1-486c-9914-f8de2088dfc5" (UID: "8d851269-5dc1-486c-9914-f8de2088dfc5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.572527 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d851269-5dc1-486c-9914-f8de2088dfc5-kube-api-access-4c4ww" (OuterVolumeSpecName: "kube-api-access-4c4ww") pod "8d851269-5dc1-486c-9914-f8de2088dfc5" (UID: "8d851269-5dc1-486c-9914-f8de2088dfc5"). InnerVolumeSpecName "kube-api-access-4c4ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.638712 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-config" (OuterVolumeSpecName: "config") pod "8d851269-5dc1-486c-9914-f8de2088dfc5" (UID: "8d851269-5dc1-486c-9914-f8de2088dfc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.666757 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d851269-5dc1-486c-9914-f8de2088dfc5" (UID: "8d851269-5dc1-486c-9914-f8de2088dfc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.668770 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c4ww\" (UniqueName: \"kubernetes.io/projected/8d851269-5dc1-486c-9914-f8de2088dfc5-kube-api-access-4c4ww\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.668822 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.668840 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.668853 4817 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.794189 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8d851269-5dc1-486c-9914-f8de2088dfc5" (UID: "8d851269-5dc1-486c-9914-f8de2088dfc5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.876853 4817 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d851269-5dc1-486c-9914-f8de2088dfc5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.931226 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1e84-account-create-update-nr95f" Mar 14 05:54:36 crc kubenswrapper[4817]: I0314 05:54:36.935859 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-efad-account-create-update-mccw2" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.080761 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf4c48c-91c4-4d85-938e-ec70263d466e-operator-scripts\") pod \"fdf4c48c-91c4-4d85-938e-ec70263d466e\" (UID: \"fdf4c48c-91c4-4d85-938e-ec70263d466e\") " Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.080965 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a140f8f-9be0-40ee-a952-2b2ef0d67031-operator-scripts\") pod \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\" (UID: \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\") " Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.081009 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgsmq\" (UniqueName: \"kubernetes.io/projected/fdf4c48c-91c4-4d85-938e-ec70263d466e-kube-api-access-rgsmq\") pod \"fdf4c48c-91c4-4d85-938e-ec70263d466e\" (UID: \"fdf4c48c-91c4-4d85-938e-ec70263d466e\") " Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.081193 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhtwp\" (UniqueName: \"kubernetes.io/projected/7a140f8f-9be0-40ee-a952-2b2ef0d67031-kube-api-access-lhtwp\") pod \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\" (UID: \"7a140f8f-9be0-40ee-a952-2b2ef0d67031\") " Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.085866 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf4c48c-91c4-4d85-938e-ec70263d466e-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "fdf4c48c-91c4-4d85-938e-ec70263d466e" (UID: "fdf4c48c-91c4-4d85-938e-ec70263d466e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.097669 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a140f8f-9be0-40ee-a952-2b2ef0d67031-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a140f8f-9be0-40ee-a952-2b2ef0d67031" (UID: "7a140f8f-9be0-40ee-a952-2b2ef0d67031"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.097709 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a140f8f-9be0-40ee-a952-2b2ef0d67031-kube-api-access-lhtwp" (OuterVolumeSpecName: "kube-api-access-lhtwp") pod "7a140f8f-9be0-40ee-a952-2b2ef0d67031" (UID: "7a140f8f-9be0-40ee-a952-2b2ef0d67031"). InnerVolumeSpecName "kube-api-access-lhtwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.126481 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf4c48c-91c4-4d85-938e-ec70263d466e-kube-api-access-rgsmq" (OuterVolumeSpecName: "kube-api-access-rgsmq") pod "fdf4c48c-91c4-4d85-938e-ec70263d466e" (UID: "fdf4c48c-91c4-4d85-938e-ec70263d466e"). InnerVolumeSpecName "kube-api-access-rgsmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.186352 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhtwp\" (UniqueName: \"kubernetes.io/projected/7a140f8f-9be0-40ee-a952-2b2ef0d67031-kube-api-access-lhtwp\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.186405 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf4c48c-91c4-4d85-938e-ec70263d466e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.186421 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a140f8f-9be0-40ee-a952-2b2ef0d67031-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.186432 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgsmq\" (UniqueName: \"kubernetes.io/projected/fdf4c48c-91c4-4d85-938e-ec70263d466e-kube-api-access-rgsmq\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.341119 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1e84-account-create-update-nr95f" event={"ID":"7a140f8f-9be0-40ee-a952-2b2ef0d67031","Type":"ContainerDied","Data":"cd90b22ef7e7be117b0277560fdd9d01f1b50ebd916e69a4b3b6afbd6b66e96f"} Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.341190 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd90b22ef7e7be117b0277560fdd9d01f1b50ebd916e69a4b3b6afbd6b66e96f" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.341279 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1e84-account-create-update-nr95f" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.358488 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cffdb9bc8-nfp5r" event={"ID":"8d851269-5dc1-486c-9914-f8de2088dfc5","Type":"ContainerDied","Data":"e24fa9206c713ad0666df6549d1772b72004b0acf4f9e9cf12dfadcea37894d4"} Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.358551 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cffdb9bc8-nfp5r" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.358657 4817 scope.go:117] "RemoveContainer" containerID="2ccd3ec57d75d11d5b9b809695c8cad452f5db007222cf9faad62d5f4b827fdc" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.379598 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerStarted","Data":"f981dbfddba9394372f0fa553791e6904a3393ef1ab183a838d7f1e37574f87d"} Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.380636 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="ceilometer-central-agent" containerID="cri-o://425a93adf2cae2f5fc95e4b294a89dbc45c80f913c7abeba1bd462c75b9c81b1" gracePeriod=30 Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.381516 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.381589 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="proxy-httpd" containerID="cri-o://f981dbfddba9394372f0fa553791e6904a3393ef1ab183a838d7f1e37574f87d" gracePeriod=30 Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.381691 4817 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="sg-core" containerID="cri-o://47000556cc52c769b82f6c6a73d111ac8f7ccc4ef6100a1659a2ee7e661e6b72" gracePeriod=30 Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.381778 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="ceilometer-notification-agent" containerID="cri-o://010d73566370393b892871b0356bfa788b2f7707f26d2ed49d4ad15c7df068c0" gracePeriod=30 Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.415373 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-efad-account-create-update-mccw2" event={"ID":"fdf4c48c-91c4-4d85-938e-ec70263d466e","Type":"ContainerDied","Data":"949f21e3287b130c37e71f718b520bb43bbfb8ac06f699fe30ae30aa95a6bad4"} Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.415420 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="949f21e3287b130c37e71f718b520bb43bbfb8ac06f699fe30ae30aa95a6bad4" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.415667 4817 scope.go:117] "RemoveContainer" containerID="6813fa994a03d06f9c90e9d2976f859fb8358f6e42e61538c91037d211e9b2f3" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.419283 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-efad-account-create-update-mccw2" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.472080 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.988422867 podStartE2EDuration="12.460930793s" podCreationTimestamp="2026-03-14 05:54:25 +0000 UTC" firstStartedPulling="2026-03-14 05:54:26.575731488 +0000 UTC m=+1320.613992234" lastFinishedPulling="2026-03-14 05:54:36.048239414 +0000 UTC m=+1330.086500160" observedRunningTime="2026-03-14 05:54:37.439148098 +0000 UTC m=+1331.477408854" watchObservedRunningTime="2026-03-14 05:54:37.460930793 +0000 UTC m=+1331.499191539" Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.498967 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cffdb9bc8-nfp5r"] Mar 14 05:54:37 crc kubenswrapper[4817]: I0314 05:54:37.519737 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cffdb9bc8-nfp5r"] Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.425578 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.430801 4817 generic.go:334] "Generic (PLEG): container finished" podID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerID="f981dbfddba9394372f0fa553791e6904a3393ef1ab183a838d7f1e37574f87d" exitCode=0 Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.430841 4817 generic.go:334] "Generic (PLEG): container finished" podID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerID="47000556cc52c769b82f6c6a73d111ac8f7ccc4ef6100a1659a2ee7e661e6b72" exitCode=2 Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.430852 4817 generic.go:334] "Generic (PLEG): container finished" podID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerID="010d73566370393b892871b0356bfa788b2f7707f26d2ed49d4ad15c7df068c0" exitCode=0 Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 
05:54:38.430862 4817 generic.go:334] "Generic (PLEG): container finished" podID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerID="425a93adf2cae2f5fc95e4b294a89dbc45c80f913c7abeba1bd462c75b9c81b1" exitCode=0 Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.430934 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerDied","Data":"f981dbfddba9394372f0fa553791e6904a3393ef1ab183a838d7f1e37574f87d"} Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.430978 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerDied","Data":"47000556cc52c769b82f6c6a73d111ac8f7ccc4ef6100a1659a2ee7e661e6b72"} Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.430991 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerDied","Data":"010d73566370393b892871b0356bfa788b2f7707f26d2ed49d4ad15c7df068c0"} Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.431003 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerDied","Data":"425a93adf2cae2f5fc95e4b294a89dbc45c80f913c7abeba1bd462c75b9c81b1"} Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.745119 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" path="/var/lib/kubelet/pods/8d851269-5dc1-486c-9914-f8de2088dfc5/volumes" Mar 14 05:54:38 crc kubenswrapper[4817]: I0314 05:54:38.893590 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.036773 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-config-data\") pod \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.036864 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-combined-ca-bundle\") pod \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.037123 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-log-httpd\") pod \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.037156 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-run-httpd\") pod \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.037178 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-sg-core-conf-yaml\") pod \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.037200 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-ceilometer-tls-certs\") pod \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.037297 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-scripts\") pod \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.037354 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kn76\" (UniqueName: \"kubernetes.io/projected/26c2a04c-15c9-4ad3-8e16-274ae01138a1-kube-api-access-8kn76\") pod \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\" (UID: \"26c2a04c-15c9-4ad3-8e16-274ae01138a1\") " Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.039291 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "26c2a04c-15c9-4ad3-8e16-274ae01138a1" (UID: "26c2a04c-15c9-4ad3-8e16-274ae01138a1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.039647 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "26c2a04c-15c9-4ad3-8e16-274ae01138a1" (UID: "26c2a04c-15c9-4ad3-8e16-274ae01138a1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.046220 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-scripts" (OuterVolumeSpecName: "scripts") pod "26c2a04c-15c9-4ad3-8e16-274ae01138a1" (UID: "26c2a04c-15c9-4ad3-8e16-274ae01138a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.086670 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c2a04c-15c9-4ad3-8e16-274ae01138a1-kube-api-access-8kn76" (OuterVolumeSpecName: "kube-api-access-8kn76") pod "26c2a04c-15c9-4ad3-8e16-274ae01138a1" (UID: "26c2a04c-15c9-4ad3-8e16-274ae01138a1"). InnerVolumeSpecName "kube-api-access-8kn76". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.090211 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "26c2a04c-15c9-4ad3-8e16-274ae01138a1" (UID: "26c2a04c-15c9-4ad3-8e16-274ae01138a1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.118687 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "26c2a04c-15c9-4ad3-8e16-274ae01138a1" (UID: "26c2a04c-15c9-4ad3-8e16-274ae01138a1"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.139526 4817 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.139572 4817 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26c2a04c-15c9-4ad3-8e16-274ae01138a1-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.139582 4817 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.139593 4817 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.139606 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.139619 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kn76\" (UniqueName: \"kubernetes.io/projected/26c2a04c-15c9-4ad3-8e16-274ae01138a1-kube-api-access-8kn76\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.141665 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26c2a04c-15c9-4ad3-8e16-274ae01138a1" (UID: 
"26c2a04c-15c9-4ad3-8e16-274ae01138a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.154401 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-config-data" (OuterVolumeSpecName: "config-data") pod "26c2a04c-15c9-4ad3-8e16-274ae01138a1" (UID: "26c2a04c-15c9-4ad3-8e16-274ae01138a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.241883 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.241935 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2a04c-15c9-4ad3-8e16-274ae01138a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.448485 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26c2a04c-15c9-4ad3-8e16-274ae01138a1","Type":"ContainerDied","Data":"58eaef97409b6376e9d93f6b577daa20e6f94e2f0f2cb91d66e7bb270d715ac2"} Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.449013 4817 scope.go:117] "RemoveContainer" containerID="f981dbfddba9394372f0fa553791e6904a3393ef1ab183a838d7f1e37574f87d" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.448558 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.473291 4817 scope.go:117] "RemoveContainer" containerID="47000556cc52c769b82f6c6a73d111ac8f7ccc4ef6100a1659a2ee7e661e6b72" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.492025 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.506383 4817 scope.go:117] "RemoveContainer" containerID="010d73566370393b892871b0356bfa788b2f7707f26d2ed49d4ad15c7df068c0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.525213 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534067 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534625 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc6cb376-b4a6-449c-9155-4da3eaa5a94b" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534652 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc6cb376-b4a6-449c-9155-4da3eaa5a94b" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534671 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a140f8f-9be0-40ee-a952-2b2ef0d67031" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534681 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a140f8f-9be0-40ee-a952-2b2ef0d67031" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534697 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="689fbc67-9915-470c-a459-7f1787c26534" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534708 4817 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="689fbc67-9915-470c-a459-7f1787c26534" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534720 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d12d202-daac-4eb2-a09a-5c3a63251b85" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534727 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d12d202-daac-4eb2-a09a-5c3a63251b85" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534741 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerName="neutron-api" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534748 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerName="neutron-api" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534764 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="sg-core" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534772 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="sg-core" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534788 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="ceilometer-central-agent" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534796 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="ceilometer-central-agent" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534816 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf4c48c-91c4-4d85-938e-ec70263d466e" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534824 
4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf4c48c-91c4-4d85-938e-ec70263d466e" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534835 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerName="neutron-httpd" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534842 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerName="neutron-httpd" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534857 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="ceilometer-notification-agent" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534865 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="ceilometer-notification-agent" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.534877 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="proxy-httpd" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.534886 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="proxy-httpd" Mar 14 05:54:39 crc kubenswrapper[4817]: E0314 05:54:39.535231 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b608e883-8038-424c-aa1a-d5a7b23ba0bf" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535245 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b608e883-8038-424c-aa1a-d5a7b23ba0bf" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535477 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf4c48c-91c4-4d85-938e-ec70263d466e" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc 
kubenswrapper[4817]: I0314 05:54:39.535511 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d12d202-daac-4eb2-a09a-5c3a63251b85" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535523 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a140f8f-9be0-40ee-a952-2b2ef0d67031" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535536 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="sg-core" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535549 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="ceilometer-notification-agent" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535561 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="ceilometer-central-agent" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535577 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b608e883-8038-424c-aa1a-d5a7b23ba0bf" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535590 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerName="neutron-httpd" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535601 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc6cb376-b4a6-449c-9155-4da3eaa5a94b" containerName="mariadb-database-create" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535609 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="689fbc67-9915-470c-a459-7f1787c26534" containerName="mariadb-account-create-update" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535616 4817 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" containerName="proxy-httpd" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.535629 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d851269-5dc1-486c-9914-f8de2088dfc5" containerName="neutron-api" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.537856 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.545016 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.545153 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.545349 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.558137 4817 scope.go:117] "RemoveContainer" containerID="425a93adf2cae2f5fc95e4b294a89dbc45c80f913c7abeba1bd462c75b9c81b1" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.559550 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.561884 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-scripts\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.568127 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-log-httpd\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 
14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.568273 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.568421 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.568478 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-run-httpd\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.568677 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.568836 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-config-data\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.568963 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxmfh\" (UniqueName: \"kubernetes.io/projected/f681b43f-cdae-4961-8e35-96b02d6d8ee9-kube-api-access-qxmfh\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.671475 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.671609 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-config-data\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.671672 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxmfh\" (UniqueName: \"kubernetes.io/projected/f681b43f-cdae-4961-8e35-96b02d6d8ee9-kube-api-access-qxmfh\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.671728 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-scripts\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.671779 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.671828 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.673071 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-log-httpd\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.673293 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.673778 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-run-httpd\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.674344 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-run-httpd\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.676087 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.677012 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-scripts\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.680301 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.680490 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-config-data\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.680566 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.698090 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxmfh\" (UniqueName: \"kubernetes.io/projected/f681b43f-cdae-4961-8e35-96b02d6d8ee9-kube-api-access-qxmfh\") pod \"ceilometer-0\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") " pod="openstack/ceilometer-0" Mar 14 05:54:39 crc kubenswrapper[4817]: I0314 05:54:39.890936 4817 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:54:40 crc kubenswrapper[4817]: I0314 05:54:40.296460 4817 scope.go:117] "RemoveContainer" containerID="85a7e082379adfde08a802777abf483d0e78c51947ecb3fde7c6b94720c41653" Mar 14 05:54:40 crc kubenswrapper[4817]: I0314 05:54:40.573728 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:54:40 crc kubenswrapper[4817]: W0314 05:54:40.589935 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf681b43f_cdae_4961_8e35_96b02d6d8ee9.slice/crio-30a5a112692a573356bf12ba91b10bbac4c491118450bea66cf9e8e314003ddd WatchSource:0}: Error finding container 30a5a112692a573356bf12ba91b10bbac4c491118450bea66cf9e8e314003ddd: Status 404 returned error can't find the container with id 30a5a112692a573356bf12ba91b10bbac4c491118450bea66cf9e8e314003ddd Mar 14 05:54:40 crc kubenswrapper[4817]: I0314 05:54:40.744604 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c2a04c-15c9-4ad3-8e16-274ae01138a1" path="/var/lib/kubelet/pods/26c2a04c-15c9-4ad3-8e16-274ae01138a1/volumes" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.470153 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerStarted","Data":"9a038b8186ece307daf6b784824fca7d6c04b32d3ecb8d897ce1f0374064e16a"} Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.470636 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerStarted","Data":"30a5a112692a573356bf12ba91b10bbac4c491118450bea66cf9e8e314003ddd"} Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.787899 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hkrfb"] Mar 14 05:54:41 
crc kubenswrapper[4817]: I0314 05:54:41.789408 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.791764 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.792491 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.792652 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-svjhn" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.805320 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hkrfb"] Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.849294 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.849374 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-scripts\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.849407 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cg9b\" (UniqueName: \"kubernetes.io/projected/0db362f9-2d12-48c2-b94c-e1406a811e1e-kube-api-access-7cg9b\") pod 
\"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.849454 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-config-data\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.951950 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.952041 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-scripts\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.952069 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cg9b\" (UniqueName: \"kubernetes.io/projected/0db362f9-2d12-48c2-b94c-e1406a811e1e-kube-api-access-7cg9b\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.952108 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-config-data\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.958755 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-config-data\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.959053 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.961088 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-scripts\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:41 crc kubenswrapper[4817]: I0314 05:54:41.971785 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cg9b\" (UniqueName: \"kubernetes.io/projected/0db362f9-2d12-48c2-b94c-e1406a811e1e-kube-api-access-7cg9b\") pod \"nova-cell0-conductor-db-sync-hkrfb\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") " pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:42 crc kubenswrapper[4817]: I0314 05:54:42.112044 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" Mar 14 05:54:42 crc kubenswrapper[4817]: I0314 05:54:42.486296 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerStarted","Data":"58b5477cfeb6f339d5aff2cf4b0448136807c172cd5fe99e9dfb228463b4315c"} Mar 14 05:54:42 crc kubenswrapper[4817]: I0314 05:54:42.697426 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hkrfb"] Mar 14 05:54:43 crc kubenswrapper[4817]: I0314 05:54:43.516168 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerStarted","Data":"999aa2550a2bed5ba0462e84cc94063c7e6ce1fa71f1df5272e44d70c3eb576a"} Mar 14 05:54:43 crc kubenswrapper[4817]: I0314 05:54:43.519117 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" event={"ID":"0db362f9-2d12-48c2-b94c-e1406a811e1e","Type":"ContainerStarted","Data":"68d7ae74b99641b15cdd3c71d46bac2d0a89ca18b3eea6b875122e0445f58a69"} Mar 14 05:54:45 crc kubenswrapper[4817]: I0314 05:54:45.556024 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerStarted","Data":"69560bd04354782708f98dc2d51ad37dc233c8d2e3269d708ea93db643c5df63"} Mar 14 05:54:45 crc kubenswrapper[4817]: I0314 05:54:45.557023 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 05:54:46 crc kubenswrapper[4817]: I0314 05:54:46.763793 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.238322007 podStartE2EDuration="7.763765957s" podCreationTimestamp="2026-03-14 05:54:39 +0000 UTC" firstStartedPulling="2026-03-14 05:54:40.600559465 +0000 UTC m=+1334.638820211" 
lastFinishedPulling="2026-03-14 05:54:45.126003415 +0000 UTC m=+1339.164264161" observedRunningTime="2026-03-14 05:54:45.589959629 +0000 UTC m=+1339.628220395" watchObservedRunningTime="2026-03-14 05:54:46.763765957 +0000 UTC m=+1340.802026713" Mar 14 05:54:50 crc kubenswrapper[4817]: I0314 05:54:50.917879 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:54:50 crc kubenswrapper[4817]: I0314 05:54:50.918736 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="ceilometer-central-agent" containerID="cri-o://9a038b8186ece307daf6b784824fca7d6c04b32d3ecb8d897ce1f0374064e16a" gracePeriod=30 Mar 14 05:54:50 crc kubenswrapper[4817]: I0314 05:54:50.918818 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="sg-core" containerID="cri-o://999aa2550a2bed5ba0462e84cc94063c7e6ce1fa71f1df5272e44d70c3eb576a" gracePeriod=30 Mar 14 05:54:50 crc kubenswrapper[4817]: I0314 05:54:50.918845 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="proxy-httpd" containerID="cri-o://69560bd04354782708f98dc2d51ad37dc233c8d2e3269d708ea93db643c5df63" gracePeriod=30 Mar 14 05:54:50 crc kubenswrapper[4817]: I0314 05:54:50.918857 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="ceilometer-notification-agent" containerID="cri-o://58b5477cfeb6f339d5aff2cf4b0448136807c172cd5fe99e9dfb228463b4315c" gracePeriod=30 Mar 14 05:54:51 crc kubenswrapper[4817]: I0314 05:54:51.623636 4817 generic.go:334] "Generic (PLEG): container finished" podID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" 
containerID="69560bd04354782708f98dc2d51ad37dc233c8d2e3269d708ea93db643c5df63" exitCode=0 Mar 14 05:54:51 crc kubenswrapper[4817]: I0314 05:54:51.624207 4817 generic.go:334] "Generic (PLEG): container finished" podID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerID="999aa2550a2bed5ba0462e84cc94063c7e6ce1fa71f1df5272e44d70c3eb576a" exitCode=2 Mar 14 05:54:51 crc kubenswrapper[4817]: I0314 05:54:51.623724 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerDied","Data":"69560bd04354782708f98dc2d51ad37dc233c8d2e3269d708ea93db643c5df63"} Mar 14 05:54:51 crc kubenswrapper[4817]: I0314 05:54:51.624291 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerDied","Data":"999aa2550a2bed5ba0462e84cc94063c7e6ce1fa71f1df5272e44d70c3eb576a"} Mar 14 05:54:52 crc kubenswrapper[4817]: I0314 05:54:52.635374 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerDied","Data":"9a038b8186ece307daf6b784824fca7d6c04b32d3ecb8d897ce1f0374064e16a"} Mar 14 05:54:52 crc kubenswrapper[4817]: I0314 05:54:52.635321 4817 generic.go:334] "Generic (PLEG): container finished" podID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerID="9a038b8186ece307daf6b784824fca7d6c04b32d3ecb8d897ce1f0374064e16a" exitCode=0 Mar 14 05:55:07 crc kubenswrapper[4817]: I0314 05:55:07.558182 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="2a37bd39-17a6-4c93-8146-b694d6e30b37" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 05:55:09 crc kubenswrapper[4817]: E0314 05:55:09.484136 4817 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Mar 14 05:55:09 crc kubenswrapper[4817]: E0314 05:55:09.484918 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cg9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAs
Group:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-hkrfb_openstack(0db362f9-2d12-48c2-b94c-e1406a811e1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 14 05:55:09 crc kubenswrapper[4817]: E0314 05:55:09.486878 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" podUID="0db362f9-2d12-48c2-b94c-e1406a811e1e"
Mar 14 05:55:09 crc kubenswrapper[4817]: E0314 05:55:09.824999 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" podUID="0db362f9-2d12-48c2-b94c-e1406a811e1e"
Mar 14 05:55:09 crc kubenswrapper[4817]: I0314 05:55:09.894943 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.169:3000/\": dial tcp 10.217.0.169:3000: connect: connection refused"
Mar 14 05:55:10 crc kubenswrapper[4817]: I0314 05:55:10.828344 4817 generic.go:334] "Generic (PLEG): container finished" podID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerID="58b5477cfeb6f339d5aff2cf4b0448136807c172cd5fe99e9dfb228463b4315c" exitCode=0
Mar 14 05:55:10 crc kubenswrapper[4817]: I0314 05:55:10.828543 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerDied","Data":"58b5477cfeb6f339d5aff2cf4b0448136807c172cd5fe99e9dfb228463b4315c"}
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.617378 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.788250 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-ceilometer-tls-certs\") pod \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") "
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.788355 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-run-httpd\") pod \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") "
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.788402 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-config-data\") pod \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") "
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.788457 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-sg-core-conf-yaml\") pod \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") "
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.788501 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-log-httpd\") pod \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") "
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.788541 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-combined-ca-bundle\") pod \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") "
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.788577 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxmfh\" (UniqueName: \"kubernetes.io/projected/f681b43f-cdae-4961-8e35-96b02d6d8ee9-kube-api-access-qxmfh\") pod \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") "
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.788631 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-scripts\") pod \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\" (UID: \"f681b43f-cdae-4961-8e35-96b02d6d8ee9\") "
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.789041 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f681b43f-cdae-4961-8e35-96b02d6d8ee9" (UID: "f681b43f-cdae-4961-8e35-96b02d6d8ee9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.789131 4817 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.789139 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f681b43f-cdae-4961-8e35-96b02d6d8ee9" (UID: "f681b43f-cdae-4961-8e35-96b02d6d8ee9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.795515 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-scripts" (OuterVolumeSpecName: "scripts") pod "f681b43f-cdae-4961-8e35-96b02d6d8ee9" (UID: "f681b43f-cdae-4961-8e35-96b02d6d8ee9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.804009 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f681b43f-cdae-4961-8e35-96b02d6d8ee9-kube-api-access-qxmfh" (OuterVolumeSpecName: "kube-api-access-qxmfh") pod "f681b43f-cdae-4961-8e35-96b02d6d8ee9" (UID: "f681b43f-cdae-4961-8e35-96b02d6d8ee9"). InnerVolumeSpecName "kube-api-access-qxmfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.820833 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f681b43f-cdae-4961-8e35-96b02d6d8ee9" (UID: "f681b43f-cdae-4961-8e35-96b02d6d8ee9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.841904 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f681b43f-cdae-4961-8e35-96b02d6d8ee9","Type":"ContainerDied","Data":"30a5a112692a573356bf12ba91b10bbac4c491118450bea66cf9e8e314003ddd"}
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.841966 4817 scope.go:117] "RemoveContainer" containerID="69560bd04354782708f98dc2d51ad37dc233c8d2e3269d708ea93db643c5df63"
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.842104 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.844794 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f681b43f-cdae-4961-8e35-96b02d6d8ee9" (UID: "f681b43f-cdae-4961-8e35-96b02d6d8ee9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.870594 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f681b43f-cdae-4961-8e35-96b02d6d8ee9" (UID: "f681b43f-cdae-4961-8e35-96b02d6d8ee9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.891296 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.891338 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxmfh\" (UniqueName: \"kubernetes.io/projected/f681b43f-cdae-4961-8e35-96b02d6d8ee9-kube-api-access-qxmfh\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.891355 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.891366 4817 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.891378 4817 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f681b43f-cdae-4961-8e35-96b02d6d8ee9-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.891390 4817 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.922830 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-config-data" (OuterVolumeSpecName: "config-data") pod "f681b43f-cdae-4961-8e35-96b02d6d8ee9" (UID: "f681b43f-cdae-4961-8e35-96b02d6d8ee9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.982464 4817 scope.go:117] "RemoveContainer" containerID="999aa2550a2bed5ba0462e84cc94063c7e6ce1fa71f1df5272e44d70c3eb576a"
Mar 14 05:55:11 crc kubenswrapper[4817]: I0314 05:55:11.995874 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f681b43f-cdae-4961-8e35-96b02d6d8ee9-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.006404 4817 scope.go:117] "RemoveContainer" containerID="58b5477cfeb6f339d5aff2cf4b0448136807c172cd5fe99e9dfb228463b4315c"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.035771 4817 scope.go:117] "RemoveContainer" containerID="9a038b8186ece307daf6b784824fca7d6c04b32d3ecb8d897ce1f0374064e16a"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.181153 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.192953 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.218267 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:55:12 crc kubenswrapper[4817]: E0314 05:55:12.218724 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="proxy-httpd"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.218741 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="proxy-httpd"
Mar 14 05:55:12 crc kubenswrapper[4817]: E0314 05:55:12.218756 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="ceilometer-central-agent"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.218764 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="ceilometer-central-agent"
Mar 14 05:55:12 crc kubenswrapper[4817]: E0314 05:55:12.218773 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="sg-core"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.218780 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="sg-core"
Mar 14 05:55:12 crc kubenswrapper[4817]: E0314 05:55:12.218795 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="ceilometer-notification-agent"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.218801 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="ceilometer-notification-agent"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.219001 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="ceilometer-central-agent"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.219024 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="ceilometer-notification-agent"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.219034 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="proxy-httpd"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.219045 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" containerName="sg-core"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.220666 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.225937 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.226363 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.226492 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.236798 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.404461 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.404523 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2km6d\" (UniqueName: \"kubernetes.io/projected/ab612687-28ad-42f2-b391-dd959596ef34-kube-api-access-2km6d\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.404604 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-run-httpd\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.404685 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-log-httpd\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.405032 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-config-data\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.405386 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.405481 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.405673 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-scripts\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.507711 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-run-httpd\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.508267 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-log-httpd\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.508309 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-config-data\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.508338 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.508355 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-run-httpd\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.508383 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.508476 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-scripts\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.508600 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.508641 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2km6d\" (UniqueName: \"kubernetes.io/projected/ab612687-28ad-42f2-b391-dd959596ef34-kube-api-access-2km6d\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.509482 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-log-httpd\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.515599 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-config-data\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.518601 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.518839 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-scripts\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.519266 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.524809 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.528345 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2km6d\" (UniqueName: \"kubernetes.io/projected/ab612687-28ad-42f2-b391-dd959596ef34-kube-api-access-2km6d\") pod \"ceilometer-0\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") " pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.594222 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:55:12 crc kubenswrapper[4817]: I0314 05:55:12.756244 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f681b43f-cdae-4961-8e35-96b02d6d8ee9" path="/var/lib/kubelet/pods/f681b43f-cdae-4961-8e35-96b02d6d8ee9/volumes"
Mar 14 05:55:13 crc kubenswrapper[4817]: I0314 05:55:13.106452 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:55:13 crc kubenswrapper[4817]: I0314 05:55:13.398345 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:55:13 crc kubenswrapper[4817]: I0314 05:55:13.862196 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerStarted","Data":"5b91ceda1a17551bcae387c41305d09d964418fae74ec4af8961496786d51d64"}
Mar 14 05:55:14 crc kubenswrapper[4817]: I0314 05:55:14.875231 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerStarted","Data":"bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378"}
Mar 14 05:55:15 crc kubenswrapper[4817]: I0314 05:55:15.893964 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerStarted","Data":"beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8"}
Mar 14 05:55:16 crc kubenswrapper[4817]: I0314 05:55:16.906671 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerStarted","Data":"770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e"}
Mar 14 05:55:19 crc kubenswrapper[4817]: I0314 05:55:19.948728 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerStarted","Data":"fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01"}
Mar 14 05:55:19 crc kubenswrapper[4817]: I0314 05:55:19.949349 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 14 05:55:19 crc kubenswrapper[4817]: I0314 05:55:19.949045 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="sg-core" containerID="cri-o://770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e" gracePeriod=30
Mar 14 05:55:19 crc kubenswrapper[4817]: I0314 05:55:19.949012 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="proxy-httpd" containerID="cri-o://fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01" gracePeriod=30
Mar 14 05:55:19 crc kubenswrapper[4817]: I0314 05:55:19.949010 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="ceilometer-central-agent" containerID="cri-o://bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378" gracePeriod=30
Mar 14 05:55:19 crc kubenswrapper[4817]: I0314 05:55:19.949067 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="ceilometer-notification-agent" containerID="cri-o://beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8" gracePeriod=30
Mar 14 05:55:19 crc kubenswrapper[4817]: I0314 05:55:19.976878 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.221942447 podStartE2EDuration="7.976855863s" podCreationTimestamp="2026-03-14 05:55:12 +0000 UTC" firstStartedPulling="2026-03-14 05:55:13.114873794 +0000 UTC m=+1367.153134540" lastFinishedPulling="2026-03-14 05:55:18.86978722 +0000 UTC m=+1372.908047956" observedRunningTime="2026-03-14 05:55:19.973776444 +0000 UTC m=+1374.012037190" watchObservedRunningTime="2026-03-14 05:55:19.976855863 +0000 UTC m=+1374.015116609"
Mar 14 05:55:20 crc kubenswrapper[4817]: I0314 05:55:20.962782 4817 generic.go:334] "Generic (PLEG): container finished" podID="ab612687-28ad-42f2-b391-dd959596ef34" containerID="fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01" exitCode=0
Mar 14 05:55:20 crc kubenswrapper[4817]: I0314 05:55:20.964240 4817 generic.go:334] "Generic (PLEG): container finished" podID="ab612687-28ad-42f2-b391-dd959596ef34" containerID="770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e" exitCode=2
Mar 14 05:55:20 crc kubenswrapper[4817]: I0314 05:55:20.964267 4817 generic.go:334] "Generic (PLEG): container finished" podID="ab612687-28ad-42f2-b391-dd959596ef34" containerID="beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8" exitCode=0
Mar 14 05:55:20 crc kubenswrapper[4817]: I0314 05:55:20.962918 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerDied","Data":"fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01"}
Mar 14 05:55:20 crc kubenswrapper[4817]: I0314 05:55:20.964335 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerDied","Data":"770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e"}
Mar 14 05:55:20 crc kubenswrapper[4817]: I0314 05:55:20.964365 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerDied","Data":"beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8"}
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.772555 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.940220 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-log-httpd\") pod \"ab612687-28ad-42f2-b391-dd959596ef34\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") "
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.940302 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-ceilometer-tls-certs\") pod \"ab612687-28ad-42f2-b391-dd959596ef34\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") "
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.940352 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-combined-ca-bundle\") pod \"ab612687-28ad-42f2-b391-dd959596ef34\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") "
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.940488 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-config-data\") pod \"ab612687-28ad-42f2-b391-dd959596ef34\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") "
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.940562 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-run-httpd\") pod \"ab612687-28ad-42f2-b391-dd959596ef34\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") "
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.941071 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab612687-28ad-42f2-b391-dd959596ef34" (UID: "ab612687-28ad-42f2-b391-dd959596ef34"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.941167 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab612687-28ad-42f2-b391-dd959596ef34" (UID: "ab612687-28ad-42f2-b391-dd959596ef34"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.941690 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2km6d\" (UniqueName: \"kubernetes.io/projected/ab612687-28ad-42f2-b391-dd959596ef34-kube-api-access-2km6d\") pod \"ab612687-28ad-42f2-b391-dd959596ef34\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") "
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.941750 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-sg-core-conf-yaml\") pod \"ab612687-28ad-42f2-b391-dd959596ef34\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") "
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.941776 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-scripts\") pod \"ab612687-28ad-42f2-b391-dd959596ef34\" (UID: \"ab612687-28ad-42f2-b391-dd959596ef34\") "
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.942551 4817 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.942578 4817 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab612687-28ad-42f2-b391-dd959596ef34-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.948642 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-scripts" (OuterVolumeSpecName: "scripts") pod "ab612687-28ad-42f2-b391-dd959596ef34" (UID: "ab612687-28ad-42f2-b391-dd959596ef34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.948965 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab612687-28ad-42f2-b391-dd959596ef34-kube-api-access-2km6d" (OuterVolumeSpecName: "kube-api-access-2km6d") pod "ab612687-28ad-42f2-b391-dd959596ef34" (UID: "ab612687-28ad-42f2-b391-dd959596ef34"). InnerVolumeSpecName "kube-api-access-2km6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.972742 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab612687-28ad-42f2-b391-dd959596ef34" (UID: "ab612687-28ad-42f2-b391-dd959596ef34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.981160 4817 generic.go:334] "Generic (PLEG): container finished" podID="ab612687-28ad-42f2-b391-dd959596ef34" containerID="bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378" exitCode=0
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.981211 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerDied","Data":"bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378"}
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.981241 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.981290 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab612687-28ad-42f2-b391-dd959596ef34","Type":"ContainerDied","Data":"5b91ceda1a17551bcae387c41305d09d964418fae74ec4af8961496786d51d64"}
Mar 14 05:55:21 crc kubenswrapper[4817]: I0314 05:55:21.981316 4817 scope.go:117] "RemoveContainer" containerID="fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01"
Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.002131 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ab612687-28ad-42f2-b391-dd959596ef34" (UID: "ab612687-28ad-42f2-b391-dd959596ef34"). InnerVolumeSpecName "ceilometer-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.023028 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab612687-28ad-42f2-b391-dd959596ef34" (UID: "ab612687-28ad-42f2-b391-dd959596ef34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.045101 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2km6d\" (UniqueName: \"kubernetes.io/projected/ab612687-28ad-42f2-b391-dd959596ef34-kube-api-access-2km6d\") on node \"crc\" DevicePath \"\"" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.045553 4817 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.045856 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.045959 4817 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.046050 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.050660 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-config-data" (OuterVolumeSpecName: "config-data") pod "ab612687-28ad-42f2-b391-dd959596ef34" (UID: "ab612687-28ad-42f2-b391-dd959596ef34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.107220 4817 scope.go:117] "RemoveContainer" containerID="770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.132503 4817 scope.go:117] "RemoveContainer" containerID="beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.147569 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab612687-28ad-42f2-b391-dd959596ef34-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.160157 4817 scope.go:117] "RemoveContainer" containerID="bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.182025 4817 scope.go:117] "RemoveContainer" containerID="fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01" Mar 14 05:55:22 crc kubenswrapper[4817]: E0314 05:55:22.182603 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01\": container with ID starting with fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01 not found: ID does not exist" containerID="fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.182641 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01"} err="failed to get container status 
\"fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01\": rpc error: code = NotFound desc = could not find container \"fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01\": container with ID starting with fd65d0fc8d7b3935dda6573eda600647f7d6c93f6241953e54683e005878db01 not found: ID does not exist" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.182672 4817 scope.go:117] "RemoveContainer" containerID="770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e" Mar 14 05:55:22 crc kubenswrapper[4817]: E0314 05:55:22.183348 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e\": container with ID starting with 770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e not found: ID does not exist" containerID="770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.183409 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e"} err="failed to get container status \"770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e\": rpc error: code = NotFound desc = could not find container \"770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e\": container with ID starting with 770ab5f2466e2de75a39ea7a45de4ae09b7535aabaee251c040754efb5b9a70e not found: ID does not exist" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.183444 4817 scope.go:117] "RemoveContainer" containerID="beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8" Mar 14 05:55:22 crc kubenswrapper[4817]: E0314 05:55:22.184186 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8\": container with ID starting with beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8 not found: ID does not exist" containerID="beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.184226 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8"} err="failed to get container status \"beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8\": rpc error: code = NotFound desc = could not find container \"beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8\": container with ID starting with beed14b8f68ad5d4b5c9ffc77f4341bc6a9a9aa422be7b625f3568aaeb06b8f8 not found: ID does not exist" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.184248 4817 scope.go:117] "RemoveContainer" containerID="bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378" Mar 14 05:55:22 crc kubenswrapper[4817]: E0314 05:55:22.184646 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378\": container with ID starting with bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378 not found: ID does not exist" containerID="bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.184675 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378"} err="failed to get container status \"bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378\": rpc error: code = NotFound desc = could not find container \"bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378\": container with ID 
starting with bacbf2f7f0dd897edcfeef904921f233e92291e7b488818cdfbe8af786d7e378 not found: ID does not exist" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.367407 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.384426 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.409094 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:55:22 crc kubenswrapper[4817]: E0314 05:55:22.410117 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="sg-core" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.410141 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="sg-core" Mar 14 05:55:22 crc kubenswrapper[4817]: E0314 05:55:22.410166 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="proxy-httpd" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.410175 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="proxy-httpd" Mar 14 05:55:22 crc kubenswrapper[4817]: E0314 05:55:22.410207 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="ceilometer-central-agent" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.410215 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="ceilometer-central-agent" Mar 14 05:55:22 crc kubenswrapper[4817]: E0314 05:55:22.410225 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="ceilometer-notification-agent" Mar 14 05:55:22 crc kubenswrapper[4817]: 
I0314 05:55:22.410233 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="ceilometer-notification-agent" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.410456 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="proxy-httpd" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.410474 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="ceilometer-central-agent" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.410492 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="sg-core" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.410517 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab612687-28ad-42f2-b391-dd959596ef34" containerName="ceilometer-notification-agent" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.412205 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.414978 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.415282 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.415549 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.419871 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.555709 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.555790 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.555874 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-run-httpd\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.555945 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.555968 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86zl7\" (UniqueName: \"kubernetes.io/projected/ec23a0c2-0437-4585-a6a8-b105a19467be-kube-api-access-86zl7\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.555987 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-log-httpd\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.556081 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-scripts\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.556123 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-config-data\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.657759 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-scripts\") pod \"ceilometer-0\" (UID: 
\"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.657856 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-config-data\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.657951 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.657997 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.658038 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-run-httpd\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.658093 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86zl7\" (UniqueName: \"kubernetes.io/projected/ec23a0c2-0437-4585-a6a8-b105a19467be-kube-api-access-86zl7\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.658117 4817 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.658144 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-log-httpd\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.658703 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-log-httpd\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.660653 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-run-httpd\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.664962 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.665176 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc 
kubenswrapper[4817]: I0314 05:55:22.665579 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-scripts\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.666294 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.666331 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-config-data\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.681856 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86zl7\" (UniqueName: \"kubernetes.io/projected/ec23a0c2-0437-4585-a6a8-b105a19467be-kube-api-access-86zl7\") pod \"ceilometer-0\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") " pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.736637 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:55:22 crc kubenswrapper[4817]: I0314 05:55:22.745334 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab612687-28ad-42f2-b391-dd959596ef34" path="/var/lib/kubelet/pods/ab612687-28ad-42f2-b391-dd959596ef34/volumes" Mar 14 05:55:23 crc kubenswrapper[4817]: I0314 05:55:23.220184 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:55:24 crc kubenswrapper[4817]: I0314 05:55:24.007580 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerStarted","Data":"7ff592090378b40a03e1fc43f718d4485c7d18acb0a17aee0ee888fc26e8ded8"} Mar 14 05:55:25 crc kubenswrapper[4817]: I0314 05:55:25.018166 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerStarted","Data":"daf80c591e01cfbf76691d517685c21bc4b8b2dd87465f9a0f9c5a88f963ffbf"} Mar 14 05:55:25 crc kubenswrapper[4817]: I0314 05:55:25.022273 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" event={"ID":"0db362f9-2d12-48c2-b94c-e1406a811e1e","Type":"ContainerStarted","Data":"e656645f2ac5d938e5ca3226051b23ba8627cf27dcdb9f4fd66f38d24b174670"} Mar 14 05:55:25 crc kubenswrapper[4817]: I0314 05:55:25.047833 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" podStartSLOduration=2.033866903 podStartE2EDuration="44.047807488s" podCreationTimestamp="2026-03-14 05:54:41 +0000 UTC" firstStartedPulling="2026-03-14 05:54:42.679482886 +0000 UTC m=+1336.717743632" lastFinishedPulling="2026-03-14 05:55:24.693423461 +0000 UTC m=+1378.731684217" observedRunningTime="2026-03-14 05:55:25.039600682 +0000 UTC m=+1379.077861428" watchObservedRunningTime="2026-03-14 05:55:25.047807488 +0000 UTC 
m=+1379.086068234" Mar 14 05:55:26 crc kubenswrapper[4817]: I0314 05:55:26.036813 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerStarted","Data":"d54777900c58f892503d6061bc98e3248f356e37d013e9ea3a9bca4eba32c36f"} Mar 14 05:55:27 crc kubenswrapper[4817]: I0314 05:55:27.047789 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerStarted","Data":"bf49da20746451edbe9ce80cfe9618c2c4f5affb783a0f2d89f90d3c8392a2ad"} Mar 14 05:55:29 crc kubenswrapper[4817]: I0314 05:55:29.070537 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerStarted","Data":"87f7e3aa8d237609ea2b3e8a2a904d9601f8f477bcd4ebf6ad6d415d0b2131a0"} Mar 14 05:55:29 crc kubenswrapper[4817]: I0314 05:55:29.072707 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 05:55:29 crc kubenswrapper[4817]: I0314 05:55:29.112703 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.805324135 podStartE2EDuration="7.11268097s" podCreationTimestamp="2026-03-14 05:55:22 +0000 UTC" firstStartedPulling="2026-03-14 05:55:23.231204849 +0000 UTC m=+1377.269465595" lastFinishedPulling="2026-03-14 05:55:28.538561684 +0000 UTC m=+1382.576822430" observedRunningTime="2026-03-14 05:55:29.102640572 +0000 UTC m=+1383.140901328" watchObservedRunningTime="2026-03-14 05:55:29.11268097 +0000 UTC m=+1383.150941716" Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.632504 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-phgz9"] Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.644165 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.650775 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-phgz9"] Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.752445 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-utilities\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.752837 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-catalog-content\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.752951 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjx8m\" (UniqueName: \"kubernetes.io/projected/fc659724-8af1-4a0d-abbc-e7f1d8190774-kube-api-access-pjx8m\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.855754 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-utilities\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.856265 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-catalog-content\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9"
Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.856474 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjx8m\" (UniqueName: \"kubernetes.io/projected/fc659724-8af1-4a0d-abbc-e7f1d8190774-kube-api-access-pjx8m\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9"
Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.856478 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-utilities\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9"
Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.857122 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-catalog-content\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9"
Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.893838 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjx8m\" (UniqueName: \"kubernetes.io/projected/fc659724-8af1-4a0d-abbc-e7f1d8190774-kube-api-access-pjx8m\") pod \"redhat-operators-phgz9\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " pod="openshift-marketplace/redhat-operators-phgz9"
Mar 14 05:55:33 crc kubenswrapper[4817]: I0314 05:55:33.973680 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-phgz9"
Mar 14 05:55:34 crc kubenswrapper[4817]: I0314 05:55:34.272068 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-phgz9"]
Mar 14 05:55:35 crc kubenswrapper[4817]: I0314 05:55:35.137370 4817 generic.go:334] "Generic (PLEG): container finished" podID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerID="8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39" exitCode=0
Mar 14 05:55:35 crc kubenswrapper[4817]: I0314 05:55:35.138708 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phgz9" event={"ID":"fc659724-8af1-4a0d-abbc-e7f1d8190774","Type":"ContainerDied","Data":"8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39"}
Mar 14 05:55:35 crc kubenswrapper[4817]: I0314 05:55:35.138780 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phgz9" event={"ID":"fc659724-8af1-4a0d-abbc-e7f1d8190774","Type":"ContainerStarted","Data":"ef82e09fc7289bbd2ed9a5bb127f4240b9709d4822e97950b4e98d8654b5328f"}
Mar 14 05:55:37 crc kubenswrapper[4817]: I0314 05:55:37.162488 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phgz9" event={"ID":"fc659724-8af1-4a0d-abbc-e7f1d8190774","Type":"ContainerStarted","Data":"31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9"}
Mar 14 05:55:38 crc kubenswrapper[4817]: E0314 05:55:38.749808 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc659724_8af1_4a0d_abbc_e7f1d8190774.slice/crio-conmon-31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9.scope\": RecentStats: unable to find data in memory cache]"
Mar 14 05:55:39 crc kubenswrapper[4817]: I0314 05:55:39.653414 4817 generic.go:334] "Generic (PLEG): container finished" podID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerID="31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9" exitCode=0
Mar 14 05:55:39 crc kubenswrapper[4817]: I0314 05:55:39.653503 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phgz9" event={"ID":"fc659724-8af1-4a0d-abbc-e7f1d8190774","Type":"ContainerDied","Data":"31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9"}
Mar 14 05:55:41 crc kubenswrapper[4817]: I0314 05:55:41.676683 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phgz9" event={"ID":"fc659724-8af1-4a0d-abbc-e7f1d8190774","Type":"ContainerStarted","Data":"344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae"}
Mar 14 05:55:41 crc kubenswrapper[4817]: I0314 05:55:41.700241 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-phgz9" podStartSLOduration=3.074296511 podStartE2EDuration="8.700215583s" podCreationTimestamp="2026-03-14 05:55:33 +0000 UTC" firstStartedPulling="2026-03-14 05:55:35.140837374 +0000 UTC m=+1389.179098130" lastFinishedPulling="2026-03-14 05:55:40.766756456 +0000 UTC m=+1394.805017202" observedRunningTime="2026-03-14 05:55:41.696138976 +0000 UTC m=+1395.734399722" watchObservedRunningTime="2026-03-14 05:55:41.700215583 +0000 UTC m=+1395.738476329"
Mar 14 05:55:43 crc kubenswrapper[4817]: I0314 05:55:43.974639 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-phgz9"
Mar 14 05:55:43 crc kubenswrapper[4817]: I0314 05:55:43.975359 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-phgz9"
Mar 14 05:55:44 crc kubenswrapper[4817]: I0314 05:55:44.706072 4817 generic.go:334] "Generic (PLEG): container finished" podID="0db362f9-2d12-48c2-b94c-e1406a811e1e" containerID="e656645f2ac5d938e5ca3226051b23ba8627cf27dcdb9f4fd66f38d24b174670" exitCode=0
Mar 14 05:55:44 crc kubenswrapper[4817]: I0314 05:55:44.706212 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" event={"ID":"0db362f9-2d12-48c2-b94c-e1406a811e1e","Type":"ContainerDied","Data":"e656645f2ac5d938e5ca3226051b23ba8627cf27dcdb9f4fd66f38d24b174670"}
Mar 14 05:55:45 crc kubenswrapper[4817]: I0314 05:55:45.025881 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-phgz9" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="registry-server" probeResult="failure" output=<
Mar 14 05:55:45 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s
Mar 14 05:55:45 crc kubenswrapper[4817]: >
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.119281 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hkrfb"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.186565 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cg9b\" (UniqueName: \"kubernetes.io/projected/0db362f9-2d12-48c2-b94c-e1406a811e1e-kube-api-access-7cg9b\") pod \"0db362f9-2d12-48c2-b94c-e1406a811e1e\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") "
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.186698 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-scripts\") pod \"0db362f9-2d12-48c2-b94c-e1406a811e1e\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") "
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.186753 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-combined-ca-bundle\") pod \"0db362f9-2d12-48c2-b94c-e1406a811e1e\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") "
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.186885 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-config-data\") pod \"0db362f9-2d12-48c2-b94c-e1406a811e1e\" (UID: \"0db362f9-2d12-48c2-b94c-e1406a811e1e\") "
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.202226 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db362f9-2d12-48c2-b94c-e1406a811e1e-kube-api-access-7cg9b" (OuterVolumeSpecName: "kube-api-access-7cg9b") pod "0db362f9-2d12-48c2-b94c-e1406a811e1e" (UID: "0db362f9-2d12-48c2-b94c-e1406a811e1e"). InnerVolumeSpecName "kube-api-access-7cg9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.203874 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-scripts" (OuterVolumeSpecName: "scripts") pod "0db362f9-2d12-48c2-b94c-e1406a811e1e" (UID: "0db362f9-2d12-48c2-b94c-e1406a811e1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.239288 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-config-data" (OuterVolumeSpecName: "config-data") pod "0db362f9-2d12-48c2-b94c-e1406a811e1e" (UID: "0db362f9-2d12-48c2-b94c-e1406a811e1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.245137 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0db362f9-2d12-48c2-b94c-e1406a811e1e" (UID: "0db362f9-2d12-48c2-b94c-e1406a811e1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.289268 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cg9b\" (UniqueName: \"kubernetes.io/projected/0db362f9-2d12-48c2-b94c-e1406a811e1e-kube-api-access-7cg9b\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.289311 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.289321 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.289334 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db362f9-2d12-48c2-b94c-e1406a811e1e-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.746423 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hkrfb"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.747370 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hkrfb" event={"ID":"0db362f9-2d12-48c2-b94c-e1406a811e1e","Type":"ContainerDied","Data":"68d7ae74b99641b15cdd3c71d46bac2d0a89ca18b3eea6b875122e0445f58a69"}
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.747429 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68d7ae74b99641b15cdd3c71d46bac2d0a89ca18b3eea6b875122e0445f58a69"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.917559 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 14 05:55:46 crc kubenswrapper[4817]: E0314 05:55:46.925315 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db362f9-2d12-48c2-b94c-e1406a811e1e" containerName="nova-cell0-conductor-db-sync"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.925363 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db362f9-2d12-48c2-b94c-e1406a811e1e" containerName="nova-cell0-conductor-db-sync"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.925702 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db362f9-2d12-48c2-b94c-e1406a811e1e" containerName="nova-cell0-conductor-db-sync"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.926501 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.930380 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-svjhn"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.931177 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 14 05:55:46 crc kubenswrapper[4817]: I0314 05:55:46.932417 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.002193 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ffda2e9-2972-4633-aa0f-207e8095c237-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.002315 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmtxz\" (UniqueName: \"kubernetes.io/projected/4ffda2e9-2972-4633-aa0f-207e8095c237-kube-api-access-xmtxz\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.002465 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffda2e9-2972-4633-aa0f-207e8095c237-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.104460 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffda2e9-2972-4633-aa0f-207e8095c237-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.104575 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ffda2e9-2972-4633-aa0f-207e8095c237-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.104647 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmtxz\" (UniqueName: \"kubernetes.io/projected/4ffda2e9-2972-4633-aa0f-207e8095c237-kube-api-access-xmtxz\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.114108 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ffda2e9-2972-4633-aa0f-207e8095c237-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.122188 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffda2e9-2972-4633-aa0f-207e8095c237-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.125448 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmtxz\" (UniqueName: \"kubernetes.io/projected/4ffda2e9-2972-4633-aa0f-207e8095c237-kube-api-access-xmtxz\") pod \"nova-cell0-conductor-0\" (UID: \"4ffda2e9-2972-4633-aa0f-207e8095c237\") " pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.248263 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.724411 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 14 05:55:47 crc kubenswrapper[4817]: I0314 05:55:47.760966 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4ffda2e9-2972-4633-aa0f-207e8095c237","Type":"ContainerStarted","Data":"8fc3ea1065bd0db87dbffb9b5f8d651ddbd78f667a22bec93ca919ec7194beb1"}
Mar 14 05:55:48 crc kubenswrapper[4817]: I0314 05:55:48.771206 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4ffda2e9-2972-4633-aa0f-207e8095c237","Type":"ContainerStarted","Data":"f02b6834785c9baac349f35ad35cf82aa46c55461dfc962888507fe2f2e176a4"}
Mar 14 05:55:48 crc kubenswrapper[4817]: I0314 05:55:48.772956 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:48 crc kubenswrapper[4817]: I0314 05:55:48.799544 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.7995213469999998 podStartE2EDuration="2.799521347s" podCreationTimestamp="2026-03-14 05:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:55:48.788348686 +0000 UTC m=+1402.826609452" watchObservedRunningTime="2026-03-14 05:55:48.799521347 +0000 UTC m=+1402.837782093"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.276411 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.746052 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.814353 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7kps9"]
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.816230 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.820057 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.827682 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.847060 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7kps9"]
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.921173 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.921266 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-scripts\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.921304 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-config-data\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:52 crc kubenswrapper[4817]: I0314 05:55:52.921349 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwcp\" (UniqueName: \"kubernetes.io/projected/723c4456-1e70-4425-b722-f3c68ae344b4-kube-api-access-qmwcp\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.022987 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.023084 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-scripts\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.023131 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-config-data\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.023186 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwcp\" (UniqueName: \"kubernetes.io/projected/723c4456-1e70-4425-b722-f3c68ae344b4-kube-api-access-qmwcp\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.033547 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-config-data\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.044928 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-scripts\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.054621 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.054695 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.056555 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.064433 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.076886 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.079014 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwcp\" (UniqueName: \"kubernetes.io/projected/723c4456-1e70-4425-b722-f3c68ae344b4-kube-api-access-qmwcp\") pod \"nova-cell0-cell-mapping-7kps9\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.109589 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.111515 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.120330 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.124250 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmct\" (UniqueName: \"kubernetes.io/projected/906dd282-448d-410b-8537-e880ae34ae7d-kube-api-access-gpmct\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.124310 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.124334 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.124388 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.124574 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-config-data\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.124615 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j28pn\" (UniqueName: \"kubernetes.io/projected/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-kube-api-access-j28pn\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.124648 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906dd282-448d-410b-8537-e880ae34ae7d-logs\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.130414 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.167733 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.227540 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.227838 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906dd282-448d-410b-8537-e880ae34ae7d-logs\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.234600 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpmct\" (UniqueName: \"kubernetes.io/projected/906dd282-448d-410b-8537-e880ae34ae7d-kube-api-access-gpmct\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.234747 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.234839 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.235148 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.235495 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-config-data\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.235614 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j28pn\" (UniqueName: \"kubernetes.io/projected/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-kube-api-access-j28pn\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.237278 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.230356 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906dd282-448d-410b-8537-e880ae34ae7d-logs\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.244932 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.251420 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.255979 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-config-data\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.304808 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.310044 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j28pn\" (UniqueName: \"kubernetes.io/projected/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-kube-api-access-j28pn\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.321392 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpmct\" (UniqueName: \"kubernetes.io/projected/906dd282-448d-410b-8537-e880ae34ae7d-kube-api-access-gpmct\") pod \"nova-api-0\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") " pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.324380 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.348972 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.388366 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.390887 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.397856 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.399415 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.462331 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.462491 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-config-data\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.462563 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cb291d-a740-483a-9ebe-44ff67e5913a-logs\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.462590 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-config-data\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.462631 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.462736 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwct8\" (UniqueName: \"kubernetes.io/projected/67cb291d-a740-483a-9ebe-44ff67e5913a-kube-api-access-xwct8\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.462854 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9q9v\" (UniqueName: \"kubernetes.io/projected/2a39395a-2fd5-44fc-97df-22477363ea93-kube-api-access-j9q9v\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.494429 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.528052 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-szcrw"]
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.533481 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.541859 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566556 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566650 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566725 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-config-data\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566754 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-config\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566799 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cb291d-a740-483a-9ebe-44ff67e5913a-logs\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0"
Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566823 4817
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-config-data\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566855 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566901 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.566961 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwct8\" (UniqueName: \"kubernetes.io/projected/67cb291d-a740-483a-9ebe-44ff67e5913a-kube-api-access-xwct8\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.567060 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.567291 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j9q9v\" (UniqueName: \"kubernetes.io/projected/2a39395a-2fd5-44fc-97df-22477363ea93-kube-api-access-j9q9v\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.567378 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tfbf\" (UniqueName: \"kubernetes.io/projected/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-kube-api-access-9tfbf\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.568083 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cb291d-a740-483a-9ebe-44ff67e5913a-logs\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.578874 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-szcrw"] Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.579775 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.579802 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-config-data\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.582519 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.585966 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-config-data\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.587380 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwct8\" (UniqueName: \"kubernetes.io/projected/67cb291d-a740-483a-9ebe-44ff67e5913a-kube-api-access-xwct8\") pod \"nova-metadata-0\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " pod="openstack/nova-metadata-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.594071 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9q9v\" (UniqueName: \"kubernetes.io/projected/2a39395a-2fd5-44fc-97df-22477363ea93-kube-api-access-j9q9v\") pod \"nova-scheduler-0\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") " pod="openstack/nova-scheduler-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.669733 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.669859 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tfbf\" (UniqueName: \"kubernetes.io/projected/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-kube-api-access-9tfbf\") pod 
\"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.669906 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.670022 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-config\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.670078 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.671276 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.671276 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " 
pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.672093 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.672209 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-config\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.695735 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tfbf\" (UniqueName: \"kubernetes.io/projected/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-kube-api-access-9tfbf\") pod \"dnsmasq-dns-8b8cf6657-szcrw\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.717537 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.728327 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.892188 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:55:53 crc kubenswrapper[4817]: I0314 05:55:53.938476 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7kps9"] Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.246037 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.286502 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.295131 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j87wn"] Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.297378 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.331596 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.331677 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.333966 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j87wn"] Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.387283 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-config-data\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.387338 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.387453 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzfsh\" (UniqueName: \"kubernetes.io/projected/42b5af39-4042-4228-b1c4-5611d88b7256-kube-api-access-xzfsh\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.387476 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-scripts\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.489493 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzfsh\" (UniqueName: \"kubernetes.io/projected/42b5af39-4042-4228-b1c4-5611d88b7256-kube-api-access-xzfsh\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.489576 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-scripts\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.489771 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-config-data\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.489800 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.519878 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-config-data\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.530429 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.532720 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.551614 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-scripts\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " 
pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.586451 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzfsh\" (UniqueName: \"kubernetes.io/projected/42b5af39-4042-4228-b1c4-5611d88b7256-kube-api-access-xzfsh\") pod \"nova-cell1-conductor-db-sync-j87wn\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: W0314 05:55:54.654048 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a39395a_2fd5_44fc_97df_22477363ea93.slice/crio-536b20f20d1c9f7d7cd70c77b81177c63ef027cc5a9b8f8f5520443321fef0ef WatchSource:0}: Error finding container 536b20f20d1c9f7d7cd70c77b81177c63ef027cc5a9b8f8f5520443321fef0ef: Status 404 returned error can't find the container with id 536b20f20d1c9f7d7cd70c77b81177c63ef027cc5a9b8f8f5520443321fef0ef Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.689001 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.699652 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.764117 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.784614 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-szcrw"] Mar 14 05:55:54 crc kubenswrapper[4817]: W0314 05:55:54.795421 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc70e40_3ca0_4fe5_b75b_5cea7c86f91c.slice/crio-c179fefe1655de4571b81b0d7024f0aa467771117db22e41bb7b47d1a4d1edc9 WatchSource:0}: Error finding container c179fefe1655de4571b81b0d7024f0aa467771117db22e41bb7b47d1a4d1edc9: Status 404 returned error can't find the container with id c179fefe1655de4571b81b0d7024f0aa467771117db22e41bb7b47d1a4d1edc9 Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.921206 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7kps9" event={"ID":"723c4456-1e70-4425-b722-f3c68ae344b4","Type":"ContainerStarted","Data":"e46265dabb2dd2b68b37836133f920a0f45d7bd907ab51570000d4e3342ac7e1"} Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.921314 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7kps9" event={"ID":"723c4456-1e70-4425-b722-f3c68ae344b4","Type":"ContainerStarted","Data":"2002180b30dc5db935bf5b5bb6620bb38eabb2becc08ded6ffaa0af66ab3de66"} Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.923685 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4","Type":"ContainerStarted","Data":"6605888fa3110ba52aa090389305ad63c42c596c34525361d584fc4d06a748c6"} Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.953944 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a39395a-2fd5-44fc-97df-22477363ea93","Type":"ContainerStarted","Data":"536b20f20d1c9f7d7cd70c77b81177c63ef027cc5a9b8f8f5520443321fef0ef"} Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.955642 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"906dd282-448d-410b-8537-e880ae34ae7d","Type":"ContainerStarted","Data":"8f0f0355140df5ec7b17f72f5f81b109ae48d31638d2bb06bd496c3527b021fb"} Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.968591 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67cb291d-a740-483a-9ebe-44ff67e5913a","Type":"ContainerStarted","Data":"ab9f8e5b115263886de87a68605a5f5a9fc635b404a3c6bd24d5da603377622c"} Mar 14 05:55:54 crc kubenswrapper[4817]: I0314 05:55:54.972910 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" event={"ID":"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c","Type":"ContainerStarted","Data":"c179fefe1655de4571b81b0d7024f0aa467771117db22e41bb7b47d1a4d1edc9"} Mar 14 05:55:55 crc kubenswrapper[4817]: I0314 05:55:55.081731 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-phgz9" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="registry-server" probeResult="failure" output=< Mar 14 05:55:55 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:55:55 crc kubenswrapper[4817]: > Mar 14 05:55:55 crc kubenswrapper[4817]: I0314 05:55:55.179152 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7kps9" podStartSLOduration=3.179124463 
podStartE2EDuration="3.179124463s" podCreationTimestamp="2026-03-14 05:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:55:54.949867609 +0000 UTC m=+1408.988128365" watchObservedRunningTime="2026-03-14 05:55:55.179124463 +0000 UTC m=+1409.217385209" Mar 14 05:55:55 crc kubenswrapper[4817]: I0314 05:55:55.179951 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j87wn"] Mar 14 05:55:55 crc kubenswrapper[4817]: I0314 05:55:55.998564 4817 generic.go:334] "Generic (PLEG): container finished" podID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" containerID="77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b" exitCode=0 Mar 14 05:55:55 crc kubenswrapper[4817]: I0314 05:55:55.998976 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" event={"ID":"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c","Type":"ContainerDied","Data":"77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b"} Mar 14 05:55:56 crc kubenswrapper[4817]: I0314 05:55:56.005032 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j87wn" event={"ID":"42b5af39-4042-4228-b1c4-5611d88b7256","Type":"ContainerStarted","Data":"fdb7ea26a0e3821e55461b3c6f4522bc9d645736b5270ae0cdd241a7fc0a6112"} Mar 14 05:55:56 crc kubenswrapper[4817]: I0314 05:55:56.005071 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j87wn" event={"ID":"42b5af39-4042-4228-b1c4-5611d88b7256","Type":"ContainerStarted","Data":"a83f4027bd8fcf1c89efc187e2daa9dfb0bef48a4a2f3959d1bde95b0d7ebe7b"} Mar 14 05:55:56 crc kubenswrapper[4817]: I0314 05:55:56.055731 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-j87wn" podStartSLOduration=2.055700036 podStartE2EDuration="2.055700036s" 
podCreationTimestamp="2026-03-14 05:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:55:56.041402176 +0000 UTC m=+1410.079662952" watchObservedRunningTime="2026-03-14 05:55:56.055700036 +0000 UTC m=+1410.093960782" Mar 14 05:55:56 crc kubenswrapper[4817]: I0314 05:55:56.930925 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:55:56 crc kubenswrapper[4817]: I0314 05:55:56.949152 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.151245 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557796-dcrbk"] Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.154516 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557796-dcrbk" Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.156727 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.157417 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.157457 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.165944 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557796-dcrbk"] Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.349069 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbm8n\" (UniqueName: \"kubernetes.io/projected/d5689dfe-d803-4814-90cd-c8f530df370c-kube-api-access-zbm8n\") pod 
\"auto-csr-approver-29557796-dcrbk\" (UID: \"d5689dfe-d803-4814-90cd-c8f530df370c\") " pod="openshift-infra/auto-csr-approver-29557796-dcrbk" Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.451865 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbm8n\" (UniqueName: \"kubernetes.io/projected/d5689dfe-d803-4814-90cd-c8f530df370c-kube-api-access-zbm8n\") pod \"auto-csr-approver-29557796-dcrbk\" (UID: \"d5689dfe-d803-4814-90cd-c8f530df370c\") " pod="openshift-infra/auto-csr-approver-29557796-dcrbk" Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.474323 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbm8n\" (UniqueName: \"kubernetes.io/projected/d5689dfe-d803-4814-90cd-c8f530df370c-kube-api-access-zbm8n\") pod \"auto-csr-approver-29557796-dcrbk\" (UID: \"d5689dfe-d803-4814-90cd-c8f530df370c\") " pod="openshift-infra/auto-csr-approver-29557796-dcrbk" Mar 14 05:56:00 crc kubenswrapper[4817]: I0314 05:56:00.494331 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557796-dcrbk" Mar 14 05:56:02 crc kubenswrapper[4817]: I0314 05:56:02.980952 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557796-dcrbk"] Mar 14 05:56:02 crc kubenswrapper[4817]: W0314 05:56:02.989322 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5689dfe_d803_4814_90cd_c8f530df370c.slice/crio-7b0ea189ae8efc5a9a94a062e817f6a3fbd1cb3b0f41c2e4de5d2a1d242c058b WatchSource:0}: Error finding container 7b0ea189ae8efc5a9a94a062e817f6a3fbd1cb3b0f41c2e4de5d2a1d242c058b: Status 404 returned error can't find the container with id 7b0ea189ae8efc5a9a94a062e817f6a3fbd1cb3b0f41c2e4de5d2a1d242c058b Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.135169 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a39395a-2fd5-44fc-97df-22477363ea93","Type":"ContainerStarted","Data":"c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937"} Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.141975 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"906dd282-448d-410b-8537-e880ae34ae7d","Type":"ContainerStarted","Data":"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43"} Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.144094 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67cb291d-a740-483a-9ebe-44ff67e5913a","Type":"ContainerStarted","Data":"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb"} Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.148796 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" 
event={"ID":"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c","Type":"ContainerStarted","Data":"3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325"} Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.149038 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.153835 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4","Type":"ContainerStarted","Data":"317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9"} Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.158401 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9" gracePeriod=30 Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.161748 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557796-dcrbk" event={"ID":"d5689dfe-d803-4814-90cd-c8f530df370c","Type":"ContainerStarted","Data":"7b0ea189ae8efc5a9a94a062e817f6a3fbd1cb3b0f41c2e4de5d2a1d242c058b"} Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.166482 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.407202673 podStartE2EDuration="10.166452739s" podCreationTimestamp="2026-03-14 05:55:53 +0000 UTC" firstStartedPulling="2026-03-14 05:55:54.672045251 +0000 UTC m=+1408.710305997" lastFinishedPulling="2026-03-14 05:56:02.431295317 +0000 UTC m=+1416.469556063" observedRunningTime="2026-03-14 05:56:03.15567938 +0000 UTC m=+1417.193940136" watchObservedRunningTime="2026-03-14 05:56:03.166452739 +0000 UTC m=+1417.204713485" Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 
05:56:03.207276 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" podStartSLOduration=10.207245151 podStartE2EDuration="10.207245151s" podCreationTimestamp="2026-03-14 05:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:03.194412302 +0000 UTC m=+1417.232673048" watchObservedRunningTime="2026-03-14 05:56:03.207245151 +0000 UTC m=+1417.245505917" Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.535102 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.718282 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.718548 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.755152 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 14 05:56:03 crc kubenswrapper[4817]: I0314 05:56:03.784956 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.663770811 podStartE2EDuration="10.784879989s" podCreationTimestamp="2026-03-14 05:55:53 +0000 UTC" firstStartedPulling="2026-03-14 05:55:54.28618299 +0000 UTC m=+1408.324443736" lastFinishedPulling="2026-03-14 05:56:02.407292168 +0000 UTC m=+1416.445552914" observedRunningTime="2026-03-14 05:56:03.225782243 +0000 UTC m=+1417.264042999" watchObservedRunningTime="2026-03-14 05:56:03.784879989 +0000 UTC m=+1417.823140735" Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.172956 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"906dd282-448d-410b-8537-e880ae34ae7d","Type":"ContainerStarted","Data":"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286"} Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.177834 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67cb291d-a740-483a-9ebe-44ff67e5913a","Type":"ContainerStarted","Data":"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c"} Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.178332 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerName="nova-metadata-log" containerID="cri-o://a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb" gracePeriod=30 Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.178504 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerName="nova-metadata-metadata" containerID="cri-o://73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c" gracePeriod=30 Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.209129 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.384285483 podStartE2EDuration="12.209102172s" podCreationTimestamp="2026-03-14 05:55:52 +0000 UTC" firstStartedPulling="2026-03-14 05:55:54.608881757 +0000 UTC m=+1408.647142503" lastFinishedPulling="2026-03-14 05:56:02.433698446 +0000 UTC m=+1416.471959192" observedRunningTime="2026-03-14 05:56:04.198433855 +0000 UTC m=+1418.236694601" watchObservedRunningTime="2026-03-14 05:56:04.209102172 +0000 UTC m=+1418.247362918" Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.240426 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.624966514 
podStartE2EDuration="11.240203965s" podCreationTimestamp="2026-03-14 05:55:53 +0000 UTC" firstStartedPulling="2026-03-14 05:55:54.789682669 +0000 UTC m=+1408.827943405" lastFinishedPulling="2026-03-14 05:56:02.40492009 +0000 UTC m=+1416.443180856" observedRunningTime="2026-03-14 05:56:04.229160578 +0000 UTC m=+1418.267421324" watchObservedRunningTime="2026-03-14 05:56:04.240203965 +0000 UTC m=+1418.278464721" Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.251255 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.919988 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.989467 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cb291d-a740-483a-9ebe-44ff67e5913a-logs\") pod \"67cb291d-a740-483a-9ebe-44ff67e5913a\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.989547 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-config-data\") pod \"67cb291d-a740-483a-9ebe-44ff67e5913a\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.989605 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-combined-ca-bundle\") pod \"67cb291d-a740-483a-9ebe-44ff67e5913a\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.989635 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwct8\" (UniqueName: 
\"kubernetes.io/projected/67cb291d-a740-483a-9ebe-44ff67e5913a-kube-api-access-xwct8\") pod \"67cb291d-a740-483a-9ebe-44ff67e5913a\" (UID: \"67cb291d-a740-483a-9ebe-44ff67e5913a\") " Mar 14 05:56:04 crc kubenswrapper[4817]: I0314 05:56:04.991147 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67cb291d-a740-483a-9ebe-44ff67e5913a-logs" (OuterVolumeSpecName: "logs") pod "67cb291d-a740-483a-9ebe-44ff67e5913a" (UID: "67cb291d-a740-483a-9ebe-44ff67e5913a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.001092 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cb291d-a740-483a-9ebe-44ff67e5913a-kube-api-access-xwct8" (OuterVolumeSpecName: "kube-api-access-xwct8") pod "67cb291d-a740-483a-9ebe-44ff67e5913a" (UID: "67cb291d-a740-483a-9ebe-44ff67e5913a"). InnerVolumeSpecName "kube-api-access-xwct8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.027884 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-config-data" (OuterVolumeSpecName: "config-data") pod "67cb291d-a740-483a-9ebe-44ff67e5913a" (UID: "67cb291d-a740-483a-9ebe-44ff67e5913a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.031202 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67cb291d-a740-483a-9ebe-44ff67e5913a" (UID: "67cb291d-a740-483a-9ebe-44ff67e5913a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.050189 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-phgz9" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="registry-server" probeResult="failure" output=< Mar 14 05:56:05 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:56:05 crc kubenswrapper[4817]: > Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.093314 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cb291d-a740-483a-9ebe-44ff67e5913a-logs\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.093399 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.093453 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cb291d-a740-483a-9ebe-44ff67e5913a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.093468 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwct8\" (UniqueName: \"kubernetes.io/projected/67cb291d-a740-483a-9ebe-44ff67e5913a-kube-api-access-xwct8\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.213513 4817 generic.go:334] "Generic (PLEG): container finished" podID="723c4456-1e70-4425-b722-f3c68ae344b4" containerID="e46265dabb2dd2b68b37836133f920a0f45d7bd907ab51570000d4e3342ac7e1" exitCode=0 Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.213667 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7kps9" 
event={"ID":"723c4456-1e70-4425-b722-f3c68ae344b4","Type":"ContainerDied","Data":"e46265dabb2dd2b68b37836133f920a0f45d7bd907ab51570000d4e3342ac7e1"} Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.250440 4817 generic.go:334] "Generic (PLEG): container finished" podID="d5689dfe-d803-4814-90cd-c8f530df370c" containerID="383726b63b64c5e1658e6a9df7c5948c3e9f28a42462512ab694bfd7aaffc552" exitCode=0 Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.250550 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557796-dcrbk" event={"ID":"d5689dfe-d803-4814-90cd-c8f530df370c","Type":"ContainerDied","Data":"383726b63b64c5e1658e6a9df7c5948c3e9f28a42462512ab694bfd7aaffc552"} Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.273685 4817 generic.go:334] "Generic (PLEG): container finished" podID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerID="73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c" exitCode=0 Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.274254 4817 generic.go:334] "Generic (PLEG): container finished" podID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerID="a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb" exitCode=143 Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.274416 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67cb291d-a740-483a-9ebe-44ff67e5913a","Type":"ContainerDied","Data":"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c"} Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.274480 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67cb291d-a740-483a-9ebe-44ff67e5913a","Type":"ContainerDied","Data":"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb"} Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.274493 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"67cb291d-a740-483a-9ebe-44ff67e5913a","Type":"ContainerDied","Data":"ab9f8e5b115263886de87a68605a5f5a9fc635b404a3c6bd24d5da603377622c"} Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.274497 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.274512 4817 scope.go:117] "RemoveContainer" containerID="73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.319368 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.320210 4817 scope.go:117] "RemoveContainer" containerID="a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.336538 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.348102 4817 scope.go:117] "RemoveContainer" containerID="73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c" Mar 14 05:56:05 crc kubenswrapper[4817]: E0314 05:56:05.348542 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c\": container with ID starting with 73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c not found: ID does not exist" containerID="73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.348584 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c"} err="failed to get container status \"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c\": rpc error: code = NotFound desc 
= could not find container \"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c\": container with ID starting with 73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c not found: ID does not exist" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.348604 4817 scope.go:117] "RemoveContainer" containerID="a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb" Mar 14 05:56:05 crc kubenswrapper[4817]: E0314 05:56:05.349052 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb\": container with ID starting with a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb not found: ID does not exist" containerID="a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.349078 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb"} err="failed to get container status \"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb\": rpc error: code = NotFound desc = could not find container \"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb\": container with ID starting with a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb not found: ID does not exist" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.349089 4817 scope.go:117] "RemoveContainer" containerID="73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.351240 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c"} err="failed to get container status \"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c\": rpc error: code = 
NotFound desc = could not find container \"73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c\": container with ID starting with 73aea4510b09cf97bf1c22e3a7ad253715f6d46ab50156f7329e89bfded8e39c not found: ID does not exist" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.351294 4817 scope.go:117] "RemoveContainer" containerID="a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.351534 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.351714 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb"} err="failed to get container status \"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb\": rpc error: code = NotFound desc = could not find container \"a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb\": container with ID starting with a38232cf24d251063b38fb3f3d784c4f9822aac10b9269ba4d74ee720b2299eb not found: ID does not exist" Mar 14 05:56:05 crc kubenswrapper[4817]: E0314 05:56:05.352152 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerName="nova-metadata-metadata" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.352181 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerName="nova-metadata-metadata" Mar 14 05:56:05 crc kubenswrapper[4817]: E0314 05:56:05.352223 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerName="nova-metadata-log" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.352233 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerName="nova-metadata-log" Mar 14 05:56:05 crc 
kubenswrapper[4817]: I0314 05:56:05.352434 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerName="nova-metadata-metadata" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.352488 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" containerName="nova-metadata-log" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.353603 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.359040 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.359824 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.382193 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.401958 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.402022 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-config-data\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.402060 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-59p5f\" (UniqueName: \"kubernetes.io/projected/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-kube-api-access-59p5f\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.402120 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.402150 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-logs\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.504562 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59p5f\" (UniqueName: \"kubernetes.io/projected/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-kube-api-access-59p5f\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.504683 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.504720 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-logs\") pod \"nova-metadata-0\" (UID: 
\"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.504803 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.504838 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-config-data\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.505576 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-logs\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.511429 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-config-data\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.512252 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.516937 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.528069 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59p5f\" (UniqueName: \"kubernetes.io/projected/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-kube-api-access-59p5f\") pod \"nova-metadata-0\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:05 crc kubenswrapper[4817]: I0314 05:56:05.687694 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.195886 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.340632 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d","Type":"ContainerStarted","Data":"4ae57c7cf8e336f14bd84b47a43fc11e23e0f50840b7ad191fc58714263249da"} Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.750809 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67cb291d-a740-483a-9ebe-44ff67e5913a" path="/var/lib/kubelet/pods/67cb291d-a740-483a-9ebe-44ff67e5913a/volumes" Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.833339 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7kps9" Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.938590 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-config-data\") pod \"723c4456-1e70-4425-b722-f3c68ae344b4\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.939272 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-scripts\") pod \"723c4456-1e70-4425-b722-f3c68ae344b4\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.939418 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-combined-ca-bundle\") pod \"723c4456-1e70-4425-b722-f3c68ae344b4\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.939493 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmwcp\" (UniqueName: \"kubernetes.io/projected/723c4456-1e70-4425-b722-f3c68ae344b4-kube-api-access-qmwcp\") pod \"723c4456-1e70-4425-b722-f3c68ae344b4\" (UID: \"723c4456-1e70-4425-b722-f3c68ae344b4\") " Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.944429 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-scripts" (OuterVolumeSpecName: "scripts") pod "723c4456-1e70-4425-b722-f3c68ae344b4" (UID: "723c4456-1e70-4425-b722-f3c68ae344b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.948114 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723c4456-1e70-4425-b722-f3c68ae344b4-kube-api-access-qmwcp" (OuterVolumeSpecName: "kube-api-access-qmwcp") pod "723c4456-1e70-4425-b722-f3c68ae344b4" (UID: "723c4456-1e70-4425-b722-f3c68ae344b4"). InnerVolumeSpecName "kube-api-access-qmwcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.951702 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557796-dcrbk" Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.976783 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-config-data" (OuterVolumeSpecName: "config-data") pod "723c4456-1e70-4425-b722-f3c68ae344b4" (UID: "723c4456-1e70-4425-b722-f3c68ae344b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:06 crc kubenswrapper[4817]: I0314 05:56:06.998879 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "723c4456-1e70-4425-b722-f3c68ae344b4" (UID: "723c4456-1e70-4425-b722-f3c68ae344b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.043601 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.043650 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.043664 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/723c4456-1e70-4425-b722-f3c68ae344b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.043678 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmwcp\" (UniqueName: \"kubernetes.io/projected/723c4456-1e70-4425-b722-f3c68ae344b4-kube-api-access-qmwcp\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.145444 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbm8n\" (UniqueName: \"kubernetes.io/projected/d5689dfe-d803-4814-90cd-c8f530df370c-kube-api-access-zbm8n\") pod \"d5689dfe-d803-4814-90cd-c8f530df370c\" (UID: \"d5689dfe-d803-4814-90cd-c8f530df370c\") " Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.153360 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5689dfe-d803-4814-90cd-c8f530df370c-kube-api-access-zbm8n" (OuterVolumeSpecName: "kube-api-access-zbm8n") pod "d5689dfe-d803-4814-90cd-c8f530df370c" (UID: "d5689dfe-d803-4814-90cd-c8f530df370c"). InnerVolumeSpecName "kube-api-access-zbm8n". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.247743 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbm8n\" (UniqueName: \"kubernetes.io/projected/d5689dfe-d803-4814-90cd-c8f530df370c-kube-api-access-zbm8n\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.355909 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557796-dcrbk" event={"ID":"d5689dfe-d803-4814-90cd-c8f530df370c","Type":"ContainerDied","Data":"7b0ea189ae8efc5a9a94a062e817f6a3fbd1cb3b0f41c2e4de5d2a1d242c058b"}
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.355973 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b0ea189ae8efc5a9a94a062e817f6a3fbd1cb3b0f41c2e4de5d2a1d242c058b"
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.356048 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557796-dcrbk"
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.364773 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d","Type":"ContainerStarted","Data":"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4"}
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.364830 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d","Type":"ContainerStarted","Data":"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931"}
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.367486 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7kps9" event={"ID":"723c4456-1e70-4425-b722-f3c68ae344b4","Type":"ContainerDied","Data":"2002180b30dc5db935bf5b5bb6620bb38eabb2becc08ded6ffaa0af66ab3de66"}
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.367529 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2002180b30dc5db935bf5b5bb6620bb38eabb2becc08ded6ffaa0af66ab3de66"
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.367601 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7kps9"
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.395310 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.395292001 podStartE2EDuration="2.395292001s" podCreationTimestamp="2026-03-14 05:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:07.38760119 +0000 UTC m=+1421.425861946" watchObservedRunningTime="2026-03-14 05:56:07.395292001 +0000 UTC m=+1421.433552737"
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.549820 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.550179 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="906dd282-448d-410b-8537-e880ae34ae7d" containerName="nova-api-log" containerID="cri-o://5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43" gracePeriod=30
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.550819 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="906dd282-448d-410b-8537-e880ae34ae7d" containerName="nova-api-api" containerID="cri-o://92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286" gracePeriod=30
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.568959 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.569578 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2a39395a-2fd5-44fc-97df-22477363ea93" containerName="nova-scheduler-scheduler" containerID="cri-o://c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937" gracePeriod=30
Mar 14 05:56:07 crc kubenswrapper[4817]: I0314 05:56:07.615018 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.060619 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557790-cdbfp"]
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.068735 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557790-cdbfp"]
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.185679 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.376811 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpmct\" (UniqueName: \"kubernetes.io/projected/906dd282-448d-410b-8537-e880ae34ae7d-kube-api-access-gpmct\") pod \"906dd282-448d-410b-8537-e880ae34ae7d\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") "
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.377103 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-config-data\") pod \"906dd282-448d-410b-8537-e880ae34ae7d\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") "
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.377160 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906dd282-448d-410b-8537-e880ae34ae7d-logs\") pod \"906dd282-448d-410b-8537-e880ae34ae7d\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") "
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.377190 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-combined-ca-bundle\") pod \"906dd282-448d-410b-8537-e880ae34ae7d\" (UID: \"906dd282-448d-410b-8537-e880ae34ae7d\") "
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.378622 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/906dd282-448d-410b-8537-e880ae34ae7d-logs" (OuterVolumeSpecName: "logs") pod "906dd282-448d-410b-8537-e880ae34ae7d" (UID: "906dd282-448d-410b-8537-e880ae34ae7d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.388590 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906dd282-448d-410b-8537-e880ae34ae7d-kube-api-access-gpmct" (OuterVolumeSpecName: "kube-api-access-gpmct") pod "906dd282-448d-410b-8537-e880ae34ae7d" (UID: "906dd282-448d-410b-8537-e880ae34ae7d"). InnerVolumeSpecName "kube-api-access-gpmct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.389477 4817 generic.go:334] "Generic (PLEG): container finished" podID="906dd282-448d-410b-8537-e880ae34ae7d" containerID="92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286" exitCode=0
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.389515 4817 generic.go:334] "Generic (PLEG): container finished" podID="906dd282-448d-410b-8537-e880ae34ae7d" containerID="5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43" exitCode=143
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.389980 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"906dd282-448d-410b-8537-e880ae34ae7d","Type":"ContainerDied","Data":"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286"}
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.390026 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"906dd282-448d-410b-8537-e880ae34ae7d","Type":"ContainerDied","Data":"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43"}
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.390029 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.390053 4817 scope.go:117] "RemoveContainer" containerID="92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.390039 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"906dd282-448d-410b-8537-e880ae34ae7d","Type":"ContainerDied","Data":"8f0f0355140df5ec7b17f72f5f81b109ae48d31638d2bb06bd496c3527b021fb"}
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.413732 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "906dd282-448d-410b-8537-e880ae34ae7d" (UID: "906dd282-448d-410b-8537-e880ae34ae7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.436969 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-config-data" (OuterVolumeSpecName: "config-data") pod "906dd282-448d-410b-8537-e880ae34ae7d" (UID: "906dd282-448d-410b-8537-e880ae34ae7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.481662 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.481707 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906dd282-448d-410b-8537-e880ae34ae7d-logs\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.481720 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906dd282-448d-410b-8537-e880ae34ae7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.481734 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpmct\" (UniqueName: \"kubernetes.io/projected/906dd282-448d-410b-8537-e880ae34ae7d-kube-api-access-gpmct\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.574543 4817 scope.go:117] "RemoveContainer" containerID="5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.596239 4817 scope.go:117] "RemoveContainer" containerID="92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286"
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.597170 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286\": container with ID starting with 92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286 not found: ID does not exist" containerID="92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.597208 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286"} err="failed to get container status \"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286\": rpc error: code = NotFound desc = could not find container \"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286\": container with ID starting with 92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286 not found: ID does not exist"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.597229 4817 scope.go:117] "RemoveContainer" containerID="5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43"
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.597565 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43\": container with ID starting with 5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43 not found: ID does not exist" containerID="5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.597589 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43"} err="failed to get container status \"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43\": rpc error: code = NotFound desc = could not find container \"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43\": container with ID starting with 5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43 not found: ID does not exist"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.597605 4817 scope.go:117] "RemoveContainer" containerID="92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.598074 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286"} err="failed to get container status \"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286\": rpc error: code = NotFound desc = could not find container \"92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286\": container with ID starting with 92fcc0a946cbe35353f7d4de69aabd4bd6a68f5c3a5c85c6de563331c65e3286 not found: ID does not exist"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.598099 4817 scope.go:117] "RemoveContainer" containerID="5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.598345 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43"} err="failed to get container status \"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43\": rpc error: code = NotFound desc = could not find container \"5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43\": container with ID starting with 5b0b42c0ca42e852da999239dc85d699c05f3859d1c9140003bf7c7180d17d43 not found: ID does not exist"
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.725421 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937 is running failed: container process not found" containerID="c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.725724 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937 is running failed: container process not found" containerID="c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.725963 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937 is running failed: container process not found" containerID="c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.725992 4817 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2a39395a-2fd5-44fc-97df-22477363ea93" containerName="nova-scheduler-scheduler"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.748129 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0edb65ab-7787-4204-bf56-c6ab3bc4e1f1" path="/var/lib/kubelet/pods/0edb65ab-7787-4204-bf56-c6ab3bc4e1f1/volumes"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.773244 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.785083 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.815653 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.816213 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5689dfe-d803-4814-90cd-c8f530df370c" containerName="oc"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.816231 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5689dfe-d803-4814-90cd-c8f530df370c" containerName="oc"
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.816250 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906dd282-448d-410b-8537-e880ae34ae7d" containerName="nova-api-log"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.816256 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="906dd282-448d-410b-8537-e880ae34ae7d" containerName="nova-api-log"
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.816279 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906dd282-448d-410b-8537-e880ae34ae7d" containerName="nova-api-api"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.816286 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="906dd282-448d-410b-8537-e880ae34ae7d" containerName="nova-api-api"
Mar 14 05:56:08 crc kubenswrapper[4817]: E0314 05:56:08.816302 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723c4456-1e70-4425-b722-f3c68ae344b4" containerName="nova-manage"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.816308 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="723c4456-1e70-4425-b722-f3c68ae344b4" containerName="nova-manage"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.816533 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="906dd282-448d-410b-8537-e880ae34ae7d" containerName="nova-api-api"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.816573 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5689dfe-d803-4814-90cd-c8f530df370c" containerName="oc"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.816588 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="906dd282-448d-410b-8537-e880ae34ae7d" containerName="nova-api-log"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.816609 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="723c4456-1e70-4425-b722-f3c68ae344b4" containerName="nova-manage"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.818918 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.822802 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.830257 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.892592 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.892667 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-config-data\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.892753 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42w2n\" (UniqueName: \"kubernetes.io/projected/a0abdc8b-208d-4494-9946-fde5855b7376-kube-api-access-42w2n\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.892816 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0abdc8b-208d-4494-9946-fde5855b7376-logs\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:08 crc kubenswrapper[4817]: I0314 05:56:08.896285 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.002026 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.002097 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-config-data\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.002146 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42w2n\" (UniqueName: \"kubernetes.io/projected/a0abdc8b-208d-4494-9946-fde5855b7376-kube-api-access-42w2n\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.002192 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0abdc8b-208d-4494-9946-fde5855b7376-logs\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.002425 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-mqbx6"]
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.002714 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" podUID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerName="dnsmasq-dns" containerID="cri-o://fc697f9a3ed95fe6fa6525a50d84486d7042a15f553948493395945f58b02057" gracePeriod=10
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.004571 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0abdc8b-208d-4494-9946-fde5855b7376-logs\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.019619 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-config-data\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.025776 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.026585 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42w2n\" (UniqueName: \"kubernetes.io/projected/a0abdc8b-208d-4494-9946-fde5855b7376-kube-api-access-42w2n\") pod \"nova-api-0\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.123357 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" podUID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.199113 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.320436 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.413010 4817 generic.go:334] "Generic (PLEG): container finished" podID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerID="fc697f9a3ed95fe6fa6525a50d84486d7042a15f553948493395945f58b02057" exitCode=0
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.413715 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" event={"ID":"75d14374-986c-4b2c-8d2c-aa97aaee29fe","Type":"ContainerDied","Data":"fc697f9a3ed95fe6fa6525a50d84486d7042a15f553948493395945f58b02057"}
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.419339 4817 generic.go:334] "Generic (PLEG): container finished" podID="2a39395a-2fd5-44fc-97df-22477363ea93" containerID="c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937" exitCode=0
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.419389 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.419400 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a39395a-2fd5-44fc-97df-22477363ea93","Type":"ContainerDied","Data":"c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937"}
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.419467 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2a39395a-2fd5-44fc-97df-22477363ea93","Type":"ContainerDied","Data":"536b20f20d1c9f7d7cd70c77b81177c63ef027cc5a9b8f8f5520443321fef0ef"}
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.419495 4817 scope.go:117] "RemoveContainer" containerID="c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.421286 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9q9v\" (UniqueName: \"kubernetes.io/projected/2a39395a-2fd5-44fc-97df-22477363ea93-kube-api-access-j9q9v\") pod \"2a39395a-2fd5-44fc-97df-22477363ea93\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") "
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.421629 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-combined-ca-bundle\") pod \"2a39395a-2fd5-44fc-97df-22477363ea93\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") "
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.422795 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-config-data\") pod \"2a39395a-2fd5-44fc-97df-22477363ea93\" (UID: \"2a39395a-2fd5-44fc-97df-22477363ea93\") "
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.425098 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerName="nova-metadata-log" containerID="cri-o://b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931" gracePeriod=30
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.425483 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerName="nova-metadata-metadata" containerID="cri-o://a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4" gracePeriod=30
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.430841 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a39395a-2fd5-44fc-97df-22477363ea93-kube-api-access-j9q9v" (OuterVolumeSpecName: "kube-api-access-j9q9v") pod "2a39395a-2fd5-44fc-97df-22477363ea93" (UID: "2a39395a-2fd5-44fc-97df-22477363ea93"). InnerVolumeSpecName "kube-api-access-j9q9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.444655 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.476205 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-config-data" (OuterVolumeSpecName: "config-data") pod "2a39395a-2fd5-44fc-97df-22477363ea93" (UID: "2a39395a-2fd5-44fc-97df-22477363ea93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.483785 4817 scope.go:117] "RemoveContainer" containerID="c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937"
Mar 14 05:56:09 crc kubenswrapper[4817]: E0314 05:56:09.484293 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937\": container with ID starting with c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937 not found: ID does not exist" containerID="c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.484325 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937"} err="failed to get container status \"c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937\": rpc error: code = NotFound desc = could not find container \"c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937\": container with ID starting with c320a23cb0d18a85e0ca33171a70c6d7c0fb8321ed0482a90ada092ac94f3937 not found: ID does not exist"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.494982 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a39395a-2fd5-44fc-97df-22477363ea93" (UID: "2a39395a-2fd5-44fc-97df-22477363ea93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.525508 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.525551 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a39395a-2fd5-44fc-97df-22477363ea93-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.525562 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9q9v\" (UniqueName: \"kubernetes.io/projected/2a39395a-2fd5-44fc-97df-22477363ea93-kube-api-access-j9q9v\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.626721 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-nb\") pod \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") "
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.627099 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-config\") pod \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") "
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.627187 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-sb\") pod \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") "
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.627209 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfqfg\" (UniqueName: \"kubernetes.io/projected/75d14374-986c-4b2c-8d2c-aa97aaee29fe-kube-api-access-bfqfg\") pod \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") "
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.627289 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-dns-svc\") pod \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\" (UID: \"75d14374-986c-4b2c-8d2c-aa97aaee29fe\") "
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.667472 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d14374-986c-4b2c-8d2c-aa97aaee29fe-kube-api-access-bfqfg" (OuterVolumeSpecName: "kube-api-access-bfqfg") pod "75d14374-986c-4b2c-8d2c-aa97aaee29fe" (UID: "75d14374-986c-4b2c-8d2c-aa97aaee29fe"). InnerVolumeSpecName "kube-api-access-bfqfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.729714 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75d14374-986c-4b2c-8d2c-aa97aaee29fe" (UID: "75d14374-986c-4b2c-8d2c-aa97aaee29fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.732484 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.732544 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfqfg\" (UniqueName: \"kubernetes.io/projected/75d14374-986c-4b2c-8d2c-aa97aaee29fe-kube-api-access-bfqfg\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.732818 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-config" (OuterVolumeSpecName: "config") pod "75d14374-986c-4b2c-8d2c-aa97aaee29fe" (UID: "75d14374-986c-4b2c-8d2c-aa97aaee29fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:56:09 crc kubenswrapper[4817]: E0314 05:56:09.803517 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fa2bc59_9cd2_4cc0_a1f0_461cd0c1771d.slice/crio-b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931.scope\": RecentStats: unable to find data in memory cache]"
Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.810817 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75d14374-986c-4b2c-8d2c-aa97aaee29fe" (UID: "75d14374-986c-4b2c-8d2c-aa97aaee29fe"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.824771 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75d14374-986c-4b2c-8d2c-aa97aaee29fe" (UID: "75d14374-986c-4b2c-8d2c-aa97aaee29fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.840498 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.840542 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.840553 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75d14374-986c-4b2c-8d2c-aa97aaee29fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.877105 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:56:09 crc kubenswrapper[4817]: I0314 05:56:09.975974 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.031976 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.046944 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: E0314 05:56:10.047907 4817 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2a39395a-2fd5-44fc-97df-22477363ea93" containerName="nova-scheduler-scheduler" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.048038 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a39395a-2fd5-44fc-97df-22477363ea93" containerName="nova-scheduler-scheduler" Mar 14 05:56:10 crc kubenswrapper[4817]: E0314 05:56:10.048159 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerName="dnsmasq-dns" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.048247 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerName="dnsmasq-dns" Mar 14 05:56:10 crc kubenswrapper[4817]: E0314 05:56:10.048327 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerName="init" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.048398 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerName="init" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.048657 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a39395a-2fd5-44fc-97df-22477363ea93" containerName="nova-scheduler-scheduler" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.048762 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" containerName="dnsmasq-dns" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.049516 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.053668 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.083326 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.150855 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4p9w\" (UniqueName: \"kubernetes.io/projected/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-kube-api-access-t4p9w\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.151736 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.151873 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-config-data\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.254583 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4p9w\" (UniqueName: \"kubernetes.io/projected/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-kube-api-access-t4p9w\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.254694 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.254730 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-config-data\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.263109 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.263671 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-config-data\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.286063 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4p9w\" (UniqueName: \"kubernetes.io/projected/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-kube-api-access-t4p9w\") pod \"nova-scheduler-0\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.359998 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.425017 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.457699 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-nova-metadata-tls-certs\") pod \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.457928 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-combined-ca-bundle\") pod \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.458044 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-config-data\") pod \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.458080 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-logs\") pod \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\" (UID: \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.458128 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59p5f\" (UniqueName: \"kubernetes.io/projected/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-kube-api-access-59p5f\") pod \"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\" (UID: 
\"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d\") " Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.459338 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-logs" (OuterVolumeSpecName: "logs") pod "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" (UID: "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.482170 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-kube-api-access-59p5f" (OuterVolumeSpecName: "kube-api-access-59p5f") pod "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" (UID: "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d"). InnerVolumeSpecName "kube-api-access-59p5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.485515 4817 generic.go:334] "Generic (PLEG): container finished" podID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerID="a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4" exitCode=0 Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.485553 4817 generic.go:334] "Generic (PLEG): container finished" podID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerID="b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931" exitCode=143 Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.485611 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d","Type":"ContainerDied","Data":"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4"} Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.485650 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d","Type":"ContainerDied","Data":"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931"} Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.485665 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d","Type":"ContainerDied","Data":"4ae57c7cf8e336f14bd84b47a43fc11e23e0f50840b7ad191fc58714263249da"} Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.485686 4817 scope.go:117] "RemoveContainer" containerID="a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.485929 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.498564 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0abdc8b-208d-4494-9946-fde5855b7376","Type":"ContainerStarted","Data":"073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311"} Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.498674 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0abdc8b-208d-4494-9946-fde5855b7376","Type":"ContainerStarted","Data":"98df0839723c96e21eb7a5f2fe5c51388e6f60e35481c4412c97964c173e2483"} Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.503667 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-config-data" (OuterVolumeSpecName: "config-data") pod "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" (UID: "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.507932 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" (UID: "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.509845 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" event={"ID":"75d14374-986c-4b2c-8d2c-aa97aaee29fe","Type":"ContainerDied","Data":"97fdd5497f773f7e985bc108fa1a02413cf6f709e4477930bce8af8e71b7c702"} Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.510070 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-mqbx6" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.533424 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" (UID: "6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.543987 4817 scope.go:117] "RemoveContainer" containerID="b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.564594 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-mqbx6"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.565107 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59p5f\" (UniqueName: \"kubernetes.io/projected/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-kube-api-access-59p5f\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.565151 4817 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.565165 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.565177 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.565190 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d-logs\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.577466 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-mqbx6"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.600881 4817 scope.go:117] 
"RemoveContainer" containerID="a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4" Mar 14 05:56:10 crc kubenswrapper[4817]: E0314 05:56:10.604068 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4\": container with ID starting with a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4 not found: ID does not exist" containerID="a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.604136 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4"} err="failed to get container status \"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4\": rpc error: code = NotFound desc = could not find container \"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4\": container with ID starting with a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4 not found: ID does not exist" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.604169 4817 scope.go:117] "RemoveContainer" containerID="b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931" Mar 14 05:56:10 crc kubenswrapper[4817]: E0314 05:56:10.605156 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931\": container with ID starting with b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931 not found: ID does not exist" containerID="b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.605203 4817 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931"} err="failed to get container status \"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931\": rpc error: code = NotFound desc = could not find container \"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931\": container with ID starting with b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931 not found: ID does not exist" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.605231 4817 scope.go:117] "RemoveContainer" containerID="a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.605796 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4"} err="failed to get container status \"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4\": rpc error: code = NotFound desc = could not find container \"a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4\": container with ID starting with a5c60fbc81f09b2acf6ef26f784db13a51c509b281eff500cd113eb8dad16cc4 not found: ID does not exist" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.605826 4817 scope.go:117] "RemoveContainer" containerID="b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.606321 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931"} err="failed to get container status \"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931\": rpc error: code = NotFound desc = could not find container \"b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931\": container with ID starting with b6ef56abb8065ab897075907095bfb6c8a814bb4469d6920fbc868e78ddb4931 not found: ID does not 
exist" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.606345 4817 scope.go:117] "RemoveContainer" containerID="fc697f9a3ed95fe6fa6525a50d84486d7042a15f553948493395945f58b02057" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.635321 4817 scope.go:117] "RemoveContainer" containerID="b39a6be9d8d4cd39df181e100d1f30125b368da7ffae2b7ff89e20b9ffc96d87" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.748153 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a39395a-2fd5-44fc-97df-22477363ea93" path="/var/lib/kubelet/pods/2a39395a-2fd5-44fc-97df-22477363ea93/volumes" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.748689 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d14374-986c-4b2c-8d2c-aa97aaee29fe" path="/var/lib/kubelet/pods/75d14374-986c-4b2c-8d2c-aa97aaee29fe/volumes" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.749503 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906dd282-448d-410b-8537-e880ae34ae7d" path="/var/lib/kubelet/pods/906dd282-448d-410b-8537-e880ae34ae7d/volumes" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.820424 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.837270 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.852678 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: E0314 05:56:10.853278 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerName="nova-metadata-log" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.853295 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerName="nova-metadata-log" Mar 14 05:56:10 crc 
kubenswrapper[4817]: E0314 05:56:10.853305 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerName="nova-metadata-metadata" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.853312 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerName="nova-metadata-metadata" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.853539 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerName="nova-metadata-log" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.853567 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" containerName="nova-metadata-metadata" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.854747 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.859329 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.859953 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.892159 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.953036 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.974224 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " 
pod="openstack/nova-metadata-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.974402 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c759869-bcda-40b9-83b4-ba6d9f53f57d-logs\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.974555 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.975610 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-config-data\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:10 crc kubenswrapper[4817]: I0314 05:56:10.975722 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfx8c\" (UniqueName: \"kubernetes.io/projected/8c759869-bcda-40b9-83b4-ba6d9f53f57d-kube-api-access-nfx8c\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.080479 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-config-data\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.081053 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfx8c\" (UniqueName: \"kubernetes.io/projected/8c759869-bcda-40b9-83b4-ba6d9f53f57d-kube-api-access-nfx8c\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.081103 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c759869-bcda-40b9-83b4-ba6d9f53f57d-logs\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.081139 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.081175 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.082221 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c759869-bcda-40b9-83b4-ba6d9f53f57d-logs\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.089708 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-config-data\") pod \"nova-metadata-0\" 
(UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.092712 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.093405 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.102860 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfx8c\" (UniqueName: \"kubernetes.io/projected/8c759869-bcda-40b9-83b4-ba6d9f53f57d-kube-api-access-nfx8c\") pod \"nova-metadata-0\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.207553 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.525737 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0abdc8b-208d-4494-9946-fde5855b7376","Type":"ContainerStarted","Data":"1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531"} Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.527949 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636","Type":"ContainerStarted","Data":"1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c"} Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.528000 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636","Type":"ContainerStarted","Data":"b7503f344eb7931ee0d486dcffd4fb4145319a7fccbce300dd020caf410cd55a"} Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.532531 4817 generic.go:334] "Generic (PLEG): container finished" podID="42b5af39-4042-4228-b1c4-5611d88b7256" containerID="fdb7ea26a0e3821e55461b3c6f4522bc9d645736b5270ae0cdd241a7fc0a6112" exitCode=0 Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.532651 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j87wn" event={"ID":"42b5af39-4042-4228-b1c4-5611d88b7256","Type":"ContainerDied","Data":"fdb7ea26a0e3821e55461b3c6f4522bc9d645736b5270ae0cdd241a7fc0a6112"} Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.550967 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.5509410409999997 podStartE2EDuration="3.550941041s" podCreationTimestamp="2026-03-14 05:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:11.548018507 +0000 UTC 
m=+1425.586279253" watchObservedRunningTime="2026-03-14 05:56:11.550941041 +0000 UTC m=+1425.589201787" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.582694 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.582667892 podStartE2EDuration="2.582667892s" podCreationTimestamp="2026-03-14 05:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:11.570867133 +0000 UTC m=+1425.609127899" watchObservedRunningTime="2026-03-14 05:56:11.582667892 +0000 UTC m=+1425.620928638" Mar 14 05:56:11 crc kubenswrapper[4817]: I0314 05:56:11.701008 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:11 crc kubenswrapper[4817]: W0314 05:56:11.706473 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c759869_bcda_40b9_83b4_ba6d9f53f57d.slice/crio-0ce467c772b6faeda391d2b7731bfb619f662ac05da7e2f3f8e8c4ab6f592e7e WatchSource:0}: Error finding container 0ce467c772b6faeda391d2b7731bfb619f662ac05da7e2f3f8e8c4ab6f592e7e: Status 404 returned error can't find the container with id 0ce467c772b6faeda391d2b7731bfb619f662ac05da7e2f3f8e8c4ab6f592e7e Mar 14 05:56:12 crc kubenswrapper[4817]: I0314 05:56:12.548148 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c759869-bcda-40b9-83b4-ba6d9f53f57d","Type":"ContainerStarted","Data":"22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32"} Mar 14 05:56:12 crc kubenswrapper[4817]: I0314 05:56:12.548972 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c759869-bcda-40b9-83b4-ba6d9f53f57d","Type":"ContainerStarted","Data":"16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb"} Mar 14 05:56:12 crc kubenswrapper[4817]: 
I0314 05:56:12.549000 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c759869-bcda-40b9-83b4-ba6d9f53f57d","Type":"ContainerStarted","Data":"0ce467c772b6faeda391d2b7731bfb619f662ac05da7e2f3f8e8c4ab6f592e7e"} Mar 14 05:56:12 crc kubenswrapper[4817]: I0314 05:56:12.578037 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.578001929 podStartE2EDuration="2.578001929s" podCreationTimestamp="2026-03-14 05:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:12.575522288 +0000 UTC m=+1426.613783034" watchObservedRunningTime="2026-03-14 05:56:12.578001929 +0000 UTC m=+1426.616262675" Mar 14 05:56:12 crc kubenswrapper[4817]: I0314 05:56:12.773423 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d" path="/var/lib/kubelet/pods/6fa2bc59-9cd2-4cc0-a1f0-461cd0c1771d/volumes" Mar 14 05:56:12 crc kubenswrapper[4817]: I0314 05:56:12.960386 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.021272 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-scripts\") pod \"42b5af39-4042-4228-b1c4-5611d88b7256\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.021436 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-combined-ca-bundle\") pod \"42b5af39-4042-4228-b1c4-5611d88b7256\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.021482 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzfsh\" (UniqueName: \"kubernetes.io/projected/42b5af39-4042-4228-b1c4-5611d88b7256-kube-api-access-xzfsh\") pod \"42b5af39-4042-4228-b1c4-5611d88b7256\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.021647 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-config-data\") pod \"42b5af39-4042-4228-b1c4-5611d88b7256\" (UID: \"42b5af39-4042-4228-b1c4-5611d88b7256\") " Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.037147 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-scripts" (OuterVolumeSpecName: "scripts") pod "42b5af39-4042-4228-b1c4-5611d88b7256" (UID: "42b5af39-4042-4228-b1c4-5611d88b7256"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.037257 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b5af39-4042-4228-b1c4-5611d88b7256-kube-api-access-xzfsh" (OuterVolumeSpecName: "kube-api-access-xzfsh") pod "42b5af39-4042-4228-b1c4-5611d88b7256" (UID: "42b5af39-4042-4228-b1c4-5611d88b7256"). InnerVolumeSpecName "kube-api-access-xzfsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.057169 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42b5af39-4042-4228-b1c4-5611d88b7256" (UID: "42b5af39-4042-4228-b1c4-5611d88b7256"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.057344 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-config-data" (OuterVolumeSpecName: "config-data") pod "42b5af39-4042-4228-b1c4-5611d88b7256" (UID: "42b5af39-4042-4228-b1c4-5611d88b7256"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.124771 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.124853 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.124873 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b5af39-4042-4228-b1c4-5611d88b7256-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.124885 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzfsh\" (UniqueName: \"kubernetes.io/projected/42b5af39-4042-4228-b1c4-5611d88b7256-kube-api-access-xzfsh\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.561105 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j87wn" event={"ID":"42b5af39-4042-4228-b1c4-5611d88b7256","Type":"ContainerDied","Data":"a83f4027bd8fcf1c89efc187e2daa9dfb0bef48a4a2f3959d1bde95b0d7ebe7b"} Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.561435 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a83f4027bd8fcf1c89efc187e2daa9dfb0bef48a4a2f3959d1bde95b0d7ebe7b" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.561146 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j87wn" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.680520 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 14 05:56:13 crc kubenswrapper[4817]: E0314 05:56:13.681069 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b5af39-4042-4228-b1c4-5611d88b7256" containerName="nova-cell1-conductor-db-sync" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.681096 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b5af39-4042-4228-b1c4-5611d88b7256" containerName="nova-cell1-conductor-db-sync" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.681339 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b5af39-4042-4228-b1c4-5611d88b7256" containerName="nova-cell1-conductor-db-sync" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.682183 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.686360 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.693864 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.737052 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqc2k\" (UniqueName: \"kubernetes.io/projected/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-kube-api-access-cqc2k\") pod \"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.737478 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.737639 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.839283 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.839366 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.839433 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqc2k\" (UniqueName: \"kubernetes.io/projected/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-kube-api-access-cqc2k\") pod \"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.845183 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.847706 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:13 crc kubenswrapper[4817]: I0314 05:56:13.858292 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqc2k\" (UniqueName: \"kubernetes.io/projected/5af4f1fd-b11b-42bd-ba99-0a9658136ea0-kube-api-access-cqc2k\") pod \"nova-cell1-conductor-0\" (UID: \"5af4f1fd-b11b-42bd-ba99-0a9658136ea0\") " pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:14 crc kubenswrapper[4817]: I0314 05:56:14.003134 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:14 crc kubenswrapper[4817]: W0314 05:56:14.542321 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af4f1fd_b11b_42bd_ba99_0a9658136ea0.slice/crio-a3d0255a0299b6e2a9e09edaebbbf8c451503572a6c3ff746a5f40a970040984 WatchSource:0}: Error finding container a3d0255a0299b6e2a9e09edaebbbf8c451503572a6c3ff746a5f40a970040984: Status 404 returned error can't find the container with id a3d0255a0299b6e2a9e09edaebbbf8c451503572a6c3ff746a5f40a970040984 Mar 14 05:56:14 crc kubenswrapper[4817]: I0314 05:56:14.544153 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 14 05:56:14 crc kubenswrapper[4817]: I0314 05:56:14.585227 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"5af4f1fd-b11b-42bd-ba99-0a9658136ea0","Type":"ContainerStarted","Data":"a3d0255a0299b6e2a9e09edaebbbf8c451503572a6c3ff746a5f40a970040984"} Mar 14 05:56:15 crc kubenswrapper[4817]: I0314 05:56:15.030959 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-phgz9" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="registry-server" probeResult="failure" output=< Mar 14 05:56:15 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 05:56:15 crc kubenswrapper[4817]: > Mar 14 05:56:15 crc kubenswrapper[4817]: I0314 05:56:15.426000 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 14 05:56:15 crc kubenswrapper[4817]: I0314 05:56:15.599983 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5af4f1fd-b11b-42bd-ba99-0a9658136ea0","Type":"ContainerStarted","Data":"24f0999b690062f91706365a4b7985cd98fae807074cb946f8a5fc8fe7eec0fd"} Mar 14 05:56:15 crc kubenswrapper[4817]: I0314 05:56:15.601093 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:15 crc kubenswrapper[4817]: I0314 05:56:15.624598 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.624574986 podStartE2EDuration="2.624574986s" podCreationTimestamp="2026-03-14 05:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:15.617126314 +0000 UTC m=+1429.655387070" watchObservedRunningTime="2026-03-14 05:56:15.624574986 +0000 UTC m=+1429.662835742" Mar 14 05:56:19 crc kubenswrapper[4817]: I0314 05:56:19.032450 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 14 05:56:19 crc kubenswrapper[4817]: I0314 
05:56:19.199932 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 05:56:19 crc kubenswrapper[4817]: I0314 05:56:19.200030 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 05:56:20 crc kubenswrapper[4817]: I0314 05:56:20.241185 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 05:56:20 crc kubenswrapper[4817]: I0314 05:56:20.282152 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 05:56:20 crc kubenswrapper[4817]: I0314 05:56:20.426241 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 14 05:56:20 crc kubenswrapper[4817]: I0314 05:56:20.456545 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 14 05:56:20 crc kubenswrapper[4817]: I0314 05:56:20.691058 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 14 05:56:21 crc kubenswrapper[4817]: I0314 05:56:21.208623 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 05:56:21 crc kubenswrapper[4817]: I0314 05:56:21.208685 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 05:56:22 crc kubenswrapper[4817]: I0314 05:56:22.257288 4817 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.186:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 05:56:22 crc kubenswrapper[4817]: I0314 05:56:22.258021 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.186:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 05:56:24 crc kubenswrapper[4817]: I0314 05:56:24.046234 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:56:24 crc kubenswrapper[4817]: I0314 05:56:24.106412 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:56:24 crc kubenswrapper[4817]: I0314 05:56:24.305283 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-phgz9"] Mar 14 05:56:25 crc kubenswrapper[4817]: I0314 05:56:25.715844 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-phgz9" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="registry-server" containerID="cri-o://344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae" gracePeriod=2 Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.194320 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.338226 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-catalog-content\") pod \"fc659724-8af1-4a0d-abbc-e7f1d8190774\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.338472 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjx8m\" (UniqueName: \"kubernetes.io/projected/fc659724-8af1-4a0d-abbc-e7f1d8190774-kube-api-access-pjx8m\") pod \"fc659724-8af1-4a0d-abbc-e7f1d8190774\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.338623 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-utilities\") pod \"fc659724-8af1-4a0d-abbc-e7f1d8190774\" (UID: \"fc659724-8af1-4a0d-abbc-e7f1d8190774\") " Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.340187 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-utilities" (OuterVolumeSpecName: "utilities") pod "fc659724-8af1-4a0d-abbc-e7f1d8190774" (UID: "fc659724-8af1-4a0d-abbc-e7f1d8190774"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.346793 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc659724-8af1-4a0d-abbc-e7f1d8190774-kube-api-access-pjx8m" (OuterVolumeSpecName: "kube-api-access-pjx8m") pod "fc659724-8af1-4a0d-abbc-e7f1d8190774" (UID: "fc659724-8af1-4a0d-abbc-e7f1d8190774"). InnerVolumeSpecName "kube-api-access-pjx8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.442000 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjx8m\" (UniqueName: \"kubernetes.io/projected/fc659724-8af1-4a0d-abbc-e7f1d8190774-kube-api-access-pjx8m\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.442047 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.527245 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc659724-8af1-4a0d-abbc-e7f1d8190774" (UID: "fc659724-8af1-4a0d-abbc-e7f1d8190774"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.544423 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc659724-8af1-4a0d-abbc-e7f1d8190774-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.728566 4817 generic.go:334] "Generic (PLEG): container finished" podID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerID="344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae" exitCode=0 Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.728764 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phgz9" event={"ID":"fc659724-8af1-4a0d-abbc-e7f1d8190774","Type":"ContainerDied","Data":"344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae"} Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.728809 4817 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-phgz9" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.729121 4817 scope.go:117] "RemoveContainer" containerID="344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.729099 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phgz9" event={"ID":"fc659724-8af1-4a0d-abbc-e7f1d8190774","Type":"ContainerDied","Data":"ef82e09fc7289bbd2ed9a5bb127f4240b9709d4822e97950b4e98d8654b5328f"} Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.775586 4817 scope.go:117] "RemoveContainer" containerID="31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.784581 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-phgz9"] Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.808650 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-phgz9"] Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.812492 4817 scope.go:117] "RemoveContainer" containerID="8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.849085 4817 scope.go:117] "RemoveContainer" containerID="344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae" Mar 14 05:56:26 crc kubenswrapper[4817]: E0314 05:56:26.849602 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae\": container with ID starting with 344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae not found: ID does not exist" containerID="344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.849643 4817 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae"} err="failed to get container status \"344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae\": rpc error: code = NotFound desc = could not find container \"344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae\": container with ID starting with 344c9a28846e48bf1ed79c7596385f997abf38da25daafde091b10ae0a02deae not found: ID does not exist" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.849671 4817 scope.go:117] "RemoveContainer" containerID="31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9" Mar 14 05:56:26 crc kubenswrapper[4817]: E0314 05:56:26.850258 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9\": container with ID starting with 31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9 not found: ID does not exist" containerID="31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.850287 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9"} err="failed to get container status \"31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9\": rpc error: code = NotFound desc = could not find container \"31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9\": container with ID starting with 31ca4a1f1104201dad655d249842df2a5d101209c18e8ff4f20a00373125c8b9 not found: ID does not exist" Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.850306 4817 scope.go:117] "RemoveContainer" containerID="8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39" Mar 14 05:56:26 crc kubenswrapper[4817]: E0314 
05:56:26.850543 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39\": container with ID starting with 8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39 not found: ID does not exist" containerID="8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39"
Mar 14 05:56:26 crc kubenswrapper[4817]: I0314 05:56:26.850568 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39"} err="failed to get container status \"8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39\": rpc error: code = NotFound desc = could not find container \"8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39\": container with ID starting with 8851dd71930edc7c34aa6c5798abd8765a3c1c2021fa78eb661e6b273afa4f39 not found: ID does not exist"
Mar 14 05:56:27 crc kubenswrapper[4817]: I0314 05:56:27.200733 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 14 05:56:27 crc kubenswrapper[4817]: I0314 05:56:27.200844 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 14 05:56:28 crc kubenswrapper[4817]: I0314 05:56:28.748385 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" path="/var/lib/kubelet/pods/fc659724-8af1-4a0d-abbc-e7f1d8190774/volumes"
Mar 14 05:56:29 crc kubenswrapper[4817]: I0314 05:56:29.205733 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 14 05:56:29 crc kubenswrapper[4817]: I0314 05:56:29.207833 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 14 05:56:29 crc kubenswrapper[4817]: I0314 05:56:29.207884 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 14 05:56:29 crc kubenswrapper[4817]: I0314 05:56:29.207916 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 14 05:56:29 crc kubenswrapper[4817]: I0314 05:56:29.209533 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 14 05:56:29 crc kubenswrapper[4817]: I0314 05:56:29.785107 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.015305 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zkwzm"]
Mar 14 05:56:30 crc kubenswrapper[4817]: E0314 05:56:30.015867 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="extract-utilities"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.015884 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="extract-utilities"
Mar 14 05:56:30 crc kubenswrapper[4817]: E0314 05:56:30.015916 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="registry-server"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.015926 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="registry-server"
Mar 14 05:56:30 crc kubenswrapper[4817]: E0314 05:56:30.015950 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="extract-content"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.015958 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="extract-content"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.016221 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc659724-8af1-4a0d-abbc-e7f1d8190774" containerName="registry-server"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.018101 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.065063 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zkwzm"]
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.134049 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn4pr\" (UniqueName: \"kubernetes.io/projected/bd3dcc2c-8045-48a2-a884-07978341aef4-kube-api-access-qn4pr\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.134123 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.134187 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.134559 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-config\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.134804 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.236956 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.237047 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.237127 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-config\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.237208 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.237290 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn4pr\" (UniqueName: \"kubernetes.io/projected/bd3dcc2c-8045-48a2-a884-07978341aef4-kube-api-access-qn4pr\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.238593 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.238653 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.239101 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.240658 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-config\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.260642 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn4pr\" (UniqueName: \"kubernetes.io/projected/bd3dcc2c-8045-48a2-a884-07978341aef4-kube-api-access-qn4pr\") pod \"dnsmasq-dns-68d4b6d797-zkwzm\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") " pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.350599 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:30 crc kubenswrapper[4817]: E0314 05:56:30.415287 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc659724_8af1_4a0d_abbc_e7f1d8190774.slice\": RecentStats: unable to find data in memory cache]"
Mar 14 05:56:30 crc kubenswrapper[4817]: I0314 05:56:30.894411 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zkwzm"]
Mar 14 05:56:31 crc kubenswrapper[4817]: I0314 05:56:31.217909 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 14 05:56:31 crc kubenswrapper[4817]: I0314 05:56:31.222041 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 14 05:56:31 crc kubenswrapper[4817]: I0314 05:56:31.226712 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 14 05:56:31 crc kubenswrapper[4817]: I0314 05:56:31.816373 4817 generic.go:334] "Generic (PLEG): container finished" podID="bd3dcc2c-8045-48a2-a884-07978341aef4" containerID="2491350498f8e956dae276a2e6d4e8465e2a798b7f2bef83a15ccec4c2023dda" exitCode=0
Mar 14 05:56:31 crc kubenswrapper[4817]: I0314 05:56:31.816491 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm" event={"ID":"bd3dcc2c-8045-48a2-a884-07978341aef4","Type":"ContainerDied","Data":"2491350498f8e956dae276a2e6d4e8465e2a798b7f2bef83a15ccec4c2023dda"}
Mar 14 05:56:31 crc kubenswrapper[4817]: I0314 05:56:31.816573 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm" event={"ID":"bd3dcc2c-8045-48a2-a884-07978341aef4","Type":"ContainerStarted","Data":"59751bb4e6cf2288ef1728119a4a58c0249d07e64eae762952859530336de65a"}
Mar 14 05:56:31 crc kubenswrapper[4817]: I0314 05:56:31.840738 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.441534 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.442276 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="ceilometer-central-agent" containerID="cri-o://daf80c591e01cfbf76691d517685c21bc4b8b2dd87465f9a0f9c5a88f963ffbf" gracePeriod=30
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.442399 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="ceilometer-notification-agent" containerID="cri-o://d54777900c58f892503d6061bc98e3248f356e37d013e9ea3a9bca4eba32c36f" gracePeriod=30
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.442418 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="sg-core" containerID="cri-o://bf49da20746451edbe9ce80cfe9618c2c4f5affb783a0f2d89f90d3c8392a2ad" gracePeriod=30
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.442602 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="proxy-httpd" containerID="cri-o://87f7e3aa8d237609ea2b3e8a2a904d9601f8f477bcd4ebf6ad6d415d0b2131a0" gracePeriod=30
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.630599 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.827981 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm" event={"ID":"bd3dcc2c-8045-48a2-a884-07978341aef4","Type":"ContainerStarted","Data":"969a12ead2d02c5597229525ea242f021b097283c3fbfa27a9dfa306f2429620"}
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.828584 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.830946 4817 generic.go:334] "Generic (PLEG): container finished" podID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerID="87f7e3aa8d237609ea2b3e8a2a904d9601f8f477bcd4ebf6ad6d415d0b2131a0" exitCode=0
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.830984 4817 generic.go:334] "Generic (PLEG): container finished" podID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerID="bf49da20746451edbe9ce80cfe9618c2c4f5affb783a0f2d89f90d3c8392a2ad" exitCode=2
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.831634 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerDied","Data":"87f7e3aa8d237609ea2b3e8a2a904d9601f8f477bcd4ebf6ad6d415d0b2131a0"}
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.831671 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerDied","Data":"bf49da20746451edbe9ce80cfe9618c2c4f5affb783a0f2d89f90d3c8392a2ad"}
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.831830 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-log" containerID="cri-o://073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311" gracePeriod=30
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.831992 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-api" containerID="cri-o://1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531" gracePeriod=30
Mar 14 05:56:32 crc kubenswrapper[4817]: I0314 05:56:32.852297 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm" podStartSLOduration=3.8522762999999998 podStartE2EDuration="3.8522763s" podCreationTimestamp="2026-03-14 05:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:32.851236881 +0000 UTC m=+1446.889497647" watchObservedRunningTime="2026-03-14 05:56:32.8522763 +0000 UTC m=+1446.890537046"
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.809314 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.849626 4817 generic.go:334] "Generic (PLEG): container finished" podID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerID="d54777900c58f892503d6061bc98e3248f356e37d013e9ea3a9bca4eba32c36f" exitCode=0
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.849666 4817 generic.go:334] "Generic (PLEG): container finished" podID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerID="daf80c591e01cfbf76691d517685c21bc4b8b2dd87465f9a0f9c5a88f963ffbf" exitCode=0
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.849719 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerDied","Data":"d54777900c58f892503d6061bc98e3248f356e37d013e9ea3a9bca4eba32c36f"}
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.849753 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerDied","Data":"daf80c591e01cfbf76691d517685c21bc4b8b2dd87465f9a0f9c5a88f963ffbf"}
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.853108 4817 generic.go:334] "Generic (PLEG): container finished" podID="a0abdc8b-208d-4494-9946-fde5855b7376" containerID="073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311" exitCode=143
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.853165 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0abdc8b-208d-4494-9946-fde5855b7376","Type":"ContainerDied","Data":"073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311"}
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.854616 4817 generic.go:334] "Generic (PLEG): container finished" podID="ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" containerID="317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9" exitCode=137
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.854669 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.854660 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4","Type":"ContainerDied","Data":"317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9"}
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.854729 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4","Type":"ContainerDied","Data":"6605888fa3110ba52aa090389305ad63c42c596c34525361d584fc4d06a748c6"}
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.854754 4817 scope.go:117] "RemoveContainer" containerID="317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9"
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.895229 4817 scope.go:117] "RemoveContainer" containerID="317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9"
Mar 14 05:56:33 crc kubenswrapper[4817]: E0314 05:56:33.895748 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9\": container with ID starting with 317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9 not found: ID does not exist" containerID="317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9"
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.895796 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9"} err="failed to get container status \"317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9\": rpc error: code = NotFound desc = could not find container \"317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9\": container with ID starting with 317a8d86b3f0b5aee5855a05b09ce1ae0d72593f3c161f24ee41739fb30ca8d9 not found: ID does not exist"
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.917583 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-config-data\") pod \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") "
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.917681 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j28pn\" (UniqueName: \"kubernetes.io/projected/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-kube-api-access-j28pn\") pod \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") "
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.918041 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-combined-ca-bundle\") pod \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\" (UID: \"ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4\") "
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.929187 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-kube-api-access-j28pn" (OuterVolumeSpecName: "kube-api-access-j28pn") pod "ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" (UID: "ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4"). InnerVolumeSpecName "kube-api-access-j28pn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.961040 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" (UID: "ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:33 crc kubenswrapper[4817]: I0314 05:56:33.961628 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-config-data" (OuterVolumeSpecName: "config-data") pod "ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" (UID: "ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.020539 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.020580 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.020591 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j28pn\" (UniqueName: \"kubernetes.io/projected/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4-kube-api-access-j28pn\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.054666 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.122235 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86zl7\" (UniqueName: \"kubernetes.io/projected/ec23a0c2-0437-4585-a6a8-b105a19467be-kube-api-access-86zl7\") pod \"ec23a0c2-0437-4585-a6a8-b105a19467be\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") "
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.122310 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-config-data\") pod \"ec23a0c2-0437-4585-a6a8-b105a19467be\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") "
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.122571 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-scripts\") pod \"ec23a0c2-0437-4585-a6a8-b105a19467be\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") "
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.122717 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-ceilometer-tls-certs\") pod \"ec23a0c2-0437-4585-a6a8-b105a19467be\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") "
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.122767 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-log-httpd\") pod \"ec23a0c2-0437-4585-a6a8-b105a19467be\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") "
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.122864 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-run-httpd\") pod \"ec23a0c2-0437-4585-a6a8-b105a19467be\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") "
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.122960 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-combined-ca-bundle\") pod \"ec23a0c2-0437-4585-a6a8-b105a19467be\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") "
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.123031 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-sg-core-conf-yaml\") pod \"ec23a0c2-0437-4585-a6a8-b105a19467be\" (UID: \"ec23a0c2-0437-4585-a6a8-b105a19467be\") "
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.123494 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec23a0c2-0437-4585-a6a8-b105a19467be" (UID: "ec23a0c2-0437-4585-a6a8-b105a19467be"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.124082 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec23a0c2-0437-4585-a6a8-b105a19467be" (UID: "ec23a0c2-0437-4585-a6a8-b105a19467be"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.124311 4817 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.124352 4817 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec23a0c2-0437-4585-a6a8-b105a19467be-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.132254 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec23a0c2-0437-4585-a6a8-b105a19467be-kube-api-access-86zl7" (OuterVolumeSpecName: "kube-api-access-86zl7") pod "ec23a0c2-0437-4585-a6a8-b105a19467be" (UID: "ec23a0c2-0437-4585-a6a8-b105a19467be"). InnerVolumeSpecName "kube-api-access-86zl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.133215 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-scripts" (OuterVolumeSpecName: "scripts") pod "ec23a0c2-0437-4585-a6a8-b105a19467be" (UID: "ec23a0c2-0437-4585-a6a8-b105a19467be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.171724 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec23a0c2-0437-4585-a6a8-b105a19467be" (UID: "ec23a0c2-0437-4585-a6a8-b105a19467be"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.206653 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ec23a0c2-0437-4585-a6a8-b105a19467be" (UID: "ec23a0c2-0437-4585-a6a8-b105a19467be"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.245220 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.246655 4817 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.246704 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86zl7\" (UniqueName: \"kubernetes.io/projected/ec23a0c2-0437-4585-a6a8-b105a19467be-kube-api-access-86zl7\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.246720 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.246737 4817 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.266671 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec23a0c2-0437-4585-a6a8-b105a19467be" (UID: "ec23a0c2-0437-4585-a6a8-b105a19467be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.277305 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.291022 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 14 05:56:34 crc kubenswrapper[4817]: E0314 05:56:34.291703 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="sg-core"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.291774 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="sg-core"
Mar 14 05:56:34 crc kubenswrapper[4817]: E0314 05:56:34.291876 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="ceilometer-notification-agent"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.291990 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="ceilometer-notification-agent"
Mar 14 05:56:34 crc kubenswrapper[4817]: E0314 05:56:34.292064 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="ceilometer-central-agent"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.292116 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="ceilometer-central-agent"
Mar 14 05:56:34 crc kubenswrapper[4817]: E0314 05:56:34.292180 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" containerName="nova-cell1-novncproxy-novncproxy"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.292230 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" containerName="nova-cell1-novncproxy-novncproxy"
Mar 14 05:56:34 crc kubenswrapper[4817]: E0314 05:56:34.292291 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="proxy-httpd"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.292340 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="proxy-httpd"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.292681 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="ceilometer-notification-agent"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.292776 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="proxy-httpd"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.292835 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" containerName="nova-cell1-novncproxy-novncproxy"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.292911 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="ceilometer-central-agent"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.292970 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" containerName="sg-core"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.294141 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.298817 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.299618 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.300549 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.308058 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.327030 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-config-data" (OuterVolumeSpecName: "config-data") pod "ec23a0c2-0437-4585-a6a8-b105a19467be" (UID: "ec23a0c2-0437-4585-a6a8-b105a19467be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.349781 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.350096 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.350220 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.350315 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.350467 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvxz\" (UniqueName: \"kubernetes.io/projected/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-kube-api-access-sqvxz\") pod \"nova-cell1-novncproxy-0\" (UID:
\"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.350634 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.350698 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec23a0c2-0437-4585-a6a8-b105a19467be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.453302 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.453830 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.453861 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.453907 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.453980 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvxz\" (UniqueName: \"kubernetes.io/projected/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-kube-api-access-sqvxz\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.458944 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.459788 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.460059 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.460531 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.472268 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvxz\" (UniqueName: \"kubernetes.io/projected/acdb327b-4e5c-4ea0-bf01-a46f9e0034b0-kube-api-access-sqvxz\") pod \"nova-cell1-novncproxy-0\" (UID: \"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.626059 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.744421 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4" path="/var/lib/kubelet/pods/ea63fa4c-10ea-4fa4-a39e-97f36cde9dd4/volumes" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.870404 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.871259 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec23a0c2-0437-4585-a6a8-b105a19467be","Type":"ContainerDied","Data":"7ff592090378b40a03e1fc43f718d4485c7d18acb0a17aee0ee888fc26e8ded8"} Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.871301 4817 scope.go:117] "RemoveContainer" containerID="87f7e3aa8d237609ea2b3e8a2a904d9601f8f477bcd4ebf6ad6d415d0b2131a0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.922144 4817 scope.go:117] "RemoveContainer" containerID="bf49da20746451edbe9ce80cfe9618c2c4f5affb783a0f2d89f90d3c8392a2ad" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.922178 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.937504 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.945670 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.949152 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.952241 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.952442 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.953271 4817 scope.go:117] "RemoveContainer" containerID="d54777900c58f892503d6061bc98e3248f356e37d013e9ea3a9bca4eba32c36f" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.956683 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.966244 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:56:34 crc kubenswrapper[4817]: I0314 05:56:34.994632 4817 scope.go:117] "RemoveContainer" containerID="daf80c591e01cfbf76691d517685c21bc4b8b2dd87465f9a0f9c5a88f963ffbf" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.067951 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-scripts\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.068033 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.068080 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.068171 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-config-data\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.068201 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bltf\" (UniqueName: \"kubernetes.io/projected/35491618-81a4-4f75-927f-6b6a3d0c9ce2-kube-api-access-8bltf\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.068242 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-run-httpd\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.068285 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.068307 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-log-httpd\") pod \"ceilometer-0\" (UID: 
\"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: W0314 05:56:35.143431 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacdb327b_4e5c_4ea0_bf01_a46f9e0034b0.slice/crio-adfcf1a56b7b9414d4266d5f822bd14fd3935ff104d558723d07eeb229ce6290 WatchSource:0}: Error finding container adfcf1a56b7b9414d4266d5f822bd14fd3935ff104d558723d07eeb229ce6290: Status 404 returned error can't find the container with id adfcf1a56b7b9414d4266d5f822bd14fd3935ff104d558723d07eeb229ce6290 Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.145780 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.172296 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-config-data\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.172348 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bltf\" (UniqueName: \"kubernetes.io/projected/35491618-81a4-4f75-927f-6b6a3d0c9ce2-kube-api-access-8bltf\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.172386 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-run-httpd\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.172414 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.172442 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-log-httpd\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.172523 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-scripts\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.172561 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.172591 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.178803 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 
05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.180001 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-log-httpd\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.180697 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-run-httpd\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.200280 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-scripts\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.200388 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-config-data\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.205871 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.215728 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.216978 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bltf\" (UniqueName: \"kubernetes.io/projected/35491618-81a4-4f75-927f-6b6a3d0c9ce2-kube-api-access-8bltf\") pod \"ceilometer-0\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") " pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.280343 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.811031 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.886930 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerStarted","Data":"d154b8c623388e15df3f460cc0bfd3cdb78da77369e206c15b943a33117492fc"} Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.889135 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0","Type":"ContainerStarted","Data":"6e597f54b3f3d3559453ca1636ada32ad8e7539ee6614b54fad62c35aa4a46de"} Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.889189 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"acdb327b-4e5c-4ea0-bf01-a46f9e0034b0","Type":"ContainerStarted","Data":"adfcf1a56b7b9414d4266d5f822bd14fd3935ff104d558723d07eeb229ce6290"} Mar 14 05:56:35 crc kubenswrapper[4817]: I0314 05:56:35.916392 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9163710470000002 podStartE2EDuration="1.916371047s" podCreationTimestamp="2026-03-14 05:56:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:35.90803687 +0000 UTC m=+1449.946297636" watchObservedRunningTime="2026-03-14 05:56:35.916371047 +0000 UTC m=+1449.954631793" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.413371 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.501376 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0abdc8b-208d-4494-9946-fde5855b7376-logs\") pod \"a0abdc8b-208d-4494-9946-fde5855b7376\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.501428 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-combined-ca-bundle\") pod \"a0abdc8b-208d-4494-9946-fde5855b7376\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.501555 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42w2n\" (UniqueName: \"kubernetes.io/projected/a0abdc8b-208d-4494-9946-fde5855b7376-kube-api-access-42w2n\") pod \"a0abdc8b-208d-4494-9946-fde5855b7376\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.501678 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-config-data\") pod \"a0abdc8b-208d-4494-9946-fde5855b7376\" (UID: \"a0abdc8b-208d-4494-9946-fde5855b7376\") " Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.502090 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a0abdc8b-208d-4494-9946-fde5855b7376-logs" (OuterVolumeSpecName: "logs") pod "a0abdc8b-208d-4494-9946-fde5855b7376" (UID: "a0abdc8b-208d-4494-9946-fde5855b7376"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.506419 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0abdc8b-208d-4494-9946-fde5855b7376-kube-api-access-42w2n" (OuterVolumeSpecName: "kube-api-access-42w2n") pod "a0abdc8b-208d-4494-9946-fde5855b7376" (UID: "a0abdc8b-208d-4494-9946-fde5855b7376"). InnerVolumeSpecName "kube-api-access-42w2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.537125 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-config-data" (OuterVolumeSpecName: "config-data") pod "a0abdc8b-208d-4494-9946-fde5855b7376" (UID: "a0abdc8b-208d-4494-9946-fde5855b7376"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.537201 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0abdc8b-208d-4494-9946-fde5855b7376" (UID: "a0abdc8b-208d-4494-9946-fde5855b7376"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.604726 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.604783 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0abdc8b-208d-4494-9946-fde5855b7376-logs\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.604797 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0abdc8b-208d-4494-9946-fde5855b7376-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.604813 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42w2n\" (UniqueName: \"kubernetes.io/projected/a0abdc8b-208d-4494-9946-fde5855b7376-kube-api-access-42w2n\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.747035 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec23a0c2-0437-4585-a6a8-b105a19467be" path="/var/lib/kubelet/pods/ec23a0c2-0437-4585-a6a8-b105a19467be/volumes" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.945259 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerStarted","Data":"9998e0aaa0f4549d9f30f501352b9a600999db9deb43bc866d4789909e5ef60c"} Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.949388 4817 generic.go:334] "Generic (PLEG): container finished" podID="a0abdc8b-208d-4494-9946-fde5855b7376" containerID="1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531" exitCode=0 Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.949608 4817 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0abdc8b-208d-4494-9946-fde5855b7376","Type":"ContainerDied","Data":"1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531"} Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.949714 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0abdc8b-208d-4494-9946-fde5855b7376","Type":"ContainerDied","Data":"98df0839723c96e21eb7a5f2fe5c51388e6f60e35481c4412c97964c173e2483"} Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.949841 4817 scope.go:117] "RemoveContainer" containerID="1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.950650 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 05:56:36 crc kubenswrapper[4817]: I0314 05:56:36.988272 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.016014 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.017997 4817 scope.go:117] "RemoveContainer" containerID="073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.031169 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 05:56:37 crc kubenswrapper[4817]: E0314 05:56:37.031751 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-log" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.031777 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-log" Mar 14 05:56:37 crc kubenswrapper[4817]: E0314 05:56:37.031795 4817 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-api" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.031804 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-api" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.032009 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-api" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.032031 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" containerName="nova-api-log" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.034539 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.040351 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.040421 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.047776 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.059857 4817 scope.go:117] "RemoveContainer" containerID="1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531" Mar 14 05:56:37 crc kubenswrapper[4817]: E0314 05:56:37.070833 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531\": container with ID starting with 1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531 not found: ID does not exist" containerID="1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531" Mar 14 05:56:37 
crc kubenswrapper[4817]: I0314 05:56:37.070902 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531"} err="failed to get container status \"1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531\": rpc error: code = NotFound desc = could not find container \"1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531\": container with ID starting with 1b12543ed481e57da77abf950f74b27c3f5de4e7db50da3be58e4ee8c595a531 not found: ID does not exist" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.070938 4817 scope.go:117] "RemoveContainer" containerID="073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311" Mar 14 05:56:37 crc kubenswrapper[4817]: E0314 05:56:37.072037 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311\": container with ID starting with 073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311 not found: ID does not exist" containerID="073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.072075 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311"} err="failed to get container status \"073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311\": rpc error: code = NotFound desc = could not find container \"073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311\": container with ID starting with 073237a9d6ad2c652df4aa1c726b87de2dd99125cb850306cad6779f1f8cf311 not found: ID does not exist" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.081070 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:56:37 crc 
kubenswrapper[4817]: I0314 05:56:37.121536 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-public-tls-certs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.121633 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.121656 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cz2\" (UniqueName: \"kubernetes.io/projected/f75824d0-46b7-44e2-8682-b2bf5158a240-kube-api-access-66cz2\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.122006 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f75824d0-46b7-44e2-8682-b2bf5158a240-logs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.122165 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.122219 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-config-data\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.223578 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-config-data\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.224244 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-public-tls-certs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.224313 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.224333 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66cz2\" (UniqueName: \"kubernetes.io/projected/f75824d0-46b7-44e2-8682-b2bf5158a240-kube-api-access-66cz2\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.224388 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f75824d0-46b7-44e2-8682-b2bf5158a240-logs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc 
kubenswrapper[4817]: I0314 05:56:37.224423 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.224967 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f75824d0-46b7-44e2-8682-b2bf5158a240-logs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.228640 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.229000 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.232487 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-public-tls-certs\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.233016 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-config-data\") pod \"nova-api-0\" (UID: 
\"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.243123 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66cz2\" (UniqueName: \"kubernetes.io/projected/f75824d0-46b7-44e2-8682-b2bf5158a240-kube-api-access-66cz2\") pod \"nova-api-0\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.387875 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.931418 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:56:37 crc kubenswrapper[4817]: W0314 05:56:37.943961 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf75824d0_46b7_44e2_8682_b2bf5158a240.slice/crio-c1807c1a517967daee92168ad47c1a329644ceb8fb47e7cc181c6f209edbee3f WatchSource:0}: Error finding container c1807c1a517967daee92168ad47c1a329644ceb8fb47e7cc181c6f209edbee3f: Status 404 returned error can't find the container with id c1807c1a517967daee92168ad47c1a329644ceb8fb47e7cc181c6f209edbee3f Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.970931 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerStarted","Data":"4c588abde7ecaea57d824b62ede42d3b5433ccc584206a55a3b15187c891a59a"} Mar 14 05:56:37 crc kubenswrapper[4817]: I0314 05:56:37.973652 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f75824d0-46b7-44e2-8682-b2bf5158a240","Type":"ContainerStarted","Data":"c1807c1a517967daee92168ad47c1a329644ceb8fb47e7cc181c6f209edbee3f"} Mar 14 05:56:38 crc kubenswrapper[4817]: I0314 05:56:38.566356 4817 patch_prober.go:28] interesting 
pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:56:38 crc kubenswrapper[4817]: I0314 05:56:38.566832 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:56:38 crc kubenswrapper[4817]: I0314 05:56:38.857177 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0abdc8b-208d-4494-9946-fde5855b7376" path="/var/lib/kubelet/pods/a0abdc8b-208d-4494-9946-fde5855b7376/volumes" Mar 14 05:56:38 crc kubenswrapper[4817]: I0314 05:56:38.987646 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerStarted","Data":"15fab14509bd8b37d8649d0d4e30717dbe50bc917695e764a7e241d22cb0cff3"} Mar 14 05:56:38 crc kubenswrapper[4817]: I0314 05:56:38.989298 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f75824d0-46b7-44e2-8682-b2bf5158a240","Type":"ContainerStarted","Data":"228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed"} Mar 14 05:56:38 crc kubenswrapper[4817]: I0314 05:56:38.989325 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f75824d0-46b7-44e2-8682-b2bf5158a240","Type":"ContainerStarted","Data":"6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e"} Mar 14 05:56:39 crc kubenswrapper[4817]: I0314 05:56:39.010278 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.010251992 
podStartE2EDuration="3.010251992s" podCreationTimestamp="2026-03-14 05:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:39.007666928 +0000 UTC m=+1453.045927694" watchObservedRunningTime="2026-03-14 05:56:39.010251992 +0000 UTC m=+1453.048512738" Mar 14 05:56:39 crc kubenswrapper[4817]: I0314 05:56:39.626757 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:40 crc kubenswrapper[4817]: I0314 05:56:40.353751 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm" Mar 14 05:56:40 crc kubenswrapper[4817]: I0314 05:56:40.467850 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-szcrw"] Mar 14 05:56:40 crc kubenswrapper[4817]: I0314 05:56:40.468193 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" podUID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" containerName="dnsmasq-dns" containerID="cri-o://3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325" gracePeriod=10 Mar 14 05:56:40 crc kubenswrapper[4817]: I0314 05:56:40.659156 4817 scope.go:117] "RemoveContainer" containerID="d02d971a1d9cc72a695ffa5496c50a92a43fd9b9c3e5064811cea5c3fe1396d6" Mar 14 05:56:40 crc kubenswrapper[4817]: E0314 05:56:40.721144 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc659724_8af1_4a0d_abbc_e7f1d8190774.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc70e40_3ca0_4fe5_b75b_5cea7c86f91c.slice/crio-conmon-3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc70e40_3ca0_4fe5_b75b_5cea7c86f91c.slice/crio-3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325.scope\": RecentStats: unable to find data in memory cache]" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.012037 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.013801 4817 generic.go:334] "Generic (PLEG): container finished" podID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" containerID="3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325" exitCode=0 Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.014426 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" event={"ID":"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c","Type":"ContainerDied","Data":"3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325"} Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.014475 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" event={"ID":"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c","Type":"ContainerDied","Data":"c179fefe1655de4571b81b0d7024f0aa467771117db22e41bb7b47d1a4d1edc9"} Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.014500 4817 scope.go:117] "RemoveContainer" containerID="3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.021294 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerStarted","Data":"203c4234bf50ffef3cbcd5c20d318313984a98c3213c7960188cd537798bba74"} Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.022489 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 
05:56:41.097923 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.895996056 podStartE2EDuration="7.097884938s" podCreationTimestamp="2026-03-14 05:56:34 +0000 UTC" firstStartedPulling="2026-03-14 05:56:35.811677385 +0000 UTC m=+1449.849938131" lastFinishedPulling="2026-03-14 05:56:40.013566267 +0000 UTC m=+1454.051827013" observedRunningTime="2026-03-14 05:56:41.093496573 +0000 UTC m=+1455.131757329" watchObservedRunningTime="2026-03-14 05:56:41.097884938 +0000 UTC m=+1455.136145684" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.108400 4817 scope.go:117] "RemoveContainer" containerID="77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.121414 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-nb\") pod \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.121548 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb\") pod \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.121584 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-config\") pod \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.121662 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tfbf\" (UniqueName: 
\"kubernetes.io/projected/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-kube-api-access-9tfbf\") pod \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.121684 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-dns-svc\") pod \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.144157 4817 scope.go:117] "RemoveContainer" containerID="3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325" Mar 14 05:56:41 crc kubenswrapper[4817]: E0314 05:56:41.145367 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325\": container with ID starting with 3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325 not found: ID does not exist" containerID="3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.145413 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325"} err="failed to get container status \"3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325\": rpc error: code = NotFound desc = could not find container \"3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325\": container with ID starting with 3bac0792a9b484dad1944b47f885ed34ccdede171ec5f3963ededf1d468a6325 not found: ID does not exist" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.145440 4817 scope.go:117] "RemoveContainer" containerID="77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b" Mar 14 05:56:41 crc kubenswrapper[4817]: E0314 
05:56:41.146079 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b\": container with ID starting with 77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b not found: ID does not exist" containerID="77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.146130 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b"} err="failed to get container status \"77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b\": rpc error: code = NotFound desc = could not find container \"77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b\": container with ID starting with 77a4df191e51b4ed5afe37f74ae2747fc4736052c7d8c76a8f76b35101b3153b not found: ID does not exist" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.154333 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-kube-api-access-9tfbf" (OuterVolumeSpecName: "kube-api-access-9tfbf") pod "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" (UID: "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c"). InnerVolumeSpecName "kube-api-access-9tfbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.197551 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" (UID: "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:56:41 crc kubenswrapper[4817]: E0314 05:56:41.216402 4817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb podName:2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c nodeName:}" failed. No retries permitted until 2026-03-14 05:56:41.716361493 +0000 UTC m=+1455.754622229 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb") pod "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" (UID: "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c") : error deleting /var/lib/kubelet/pods/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c/volume-subpaths: remove /var/lib/kubelet/pods/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c/volume-subpaths: no such file or directory Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.216841 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" (UID: "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.216878 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-config" (OuterVolumeSpecName: "config") pod "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" (UID: "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.224885 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-config\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.224947 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tfbf\" (UniqueName: \"kubernetes.io/projected/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-kube-api-access-9tfbf\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.224959 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.224970 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.734464 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb\") pod \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\" (UID: \"2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c\") " Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.735215 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" (UID: "2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:56:41 crc kubenswrapper[4817]: I0314 05:56:41.735321 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:42 crc kubenswrapper[4817]: I0314 05:56:42.033442 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-szcrw" Mar 14 05:56:42 crc kubenswrapper[4817]: I0314 05:56:42.083977 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-szcrw"] Mar 14 05:56:42 crc kubenswrapper[4817]: I0314 05:56:42.090284 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-szcrw"] Mar 14 05:56:42 crc kubenswrapper[4817]: I0314 05:56:42.747069 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" path="/var/lib/kubelet/pods/2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c/volumes" Mar 14 05:56:44 crc kubenswrapper[4817]: I0314 05:56:44.626717 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:44 crc kubenswrapper[4817]: I0314 05:56:44.648710 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.116619 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.481206 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sz6pp"] Mar 14 05:56:45 crc kubenswrapper[4817]: E0314 05:56:45.482112 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" containerName="dnsmasq-dns" Mar 14 
05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.482135 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" containerName="dnsmasq-dns" Mar 14 05:56:45 crc kubenswrapper[4817]: E0314 05:56:45.482148 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" containerName="init" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.482155 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" containerName="init" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.482355 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc70e40-3ca0-4fe5-b75b-5cea7c86f91c" containerName="dnsmasq-dns" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.483133 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.486819 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.488105 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.499880 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sz6pp"] Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.519747 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-config-data\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.519824 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-scripts\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.519846 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.519938 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq7b9\" (UniqueName: \"kubernetes.io/projected/d45463be-ecf1-4c50-a812-29dd2e00dffe-kube-api-access-fq7b9\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.621811 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-scripts\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.621862 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.621948 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fq7b9\" (UniqueName: \"kubernetes.io/projected/d45463be-ecf1-4c50-a812-29dd2e00dffe-kube-api-access-fq7b9\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.622018 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-config-data\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.632953 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-config-data\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.636754 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-scripts\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.643527 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.644888 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq7b9\" (UniqueName: 
\"kubernetes.io/projected/d45463be-ecf1-4c50-a812-29dd2e00dffe-kube-api-access-fq7b9\") pod \"nova-cell1-cell-mapping-sz6pp\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:45 crc kubenswrapper[4817]: I0314 05:56:45.804465 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:46 crc kubenswrapper[4817]: I0314 05:56:46.318581 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sz6pp"] Mar 14 05:56:47 crc kubenswrapper[4817]: I0314 05:56:47.114732 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sz6pp" event={"ID":"d45463be-ecf1-4c50-a812-29dd2e00dffe","Type":"ContainerStarted","Data":"a462159a690487218798a332fdb6a0ebb6a9ce79a08931c5268282f010b8478d"} Mar 14 05:56:47 crc kubenswrapper[4817]: I0314 05:56:47.115379 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sz6pp" event={"ID":"d45463be-ecf1-4c50-a812-29dd2e00dffe","Type":"ContainerStarted","Data":"90e9afee49e5e45e8502f6622eb886ff3ec524e210fcb9dde97a538629804d76"} Mar 14 05:56:47 crc kubenswrapper[4817]: I0314 05:56:47.161333 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sz6pp" podStartSLOduration=2.161305938 podStartE2EDuration="2.161305938s" podCreationTimestamp="2026-03-14 05:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:56:47.159770564 +0000 UTC m=+1461.198031340" watchObservedRunningTime="2026-03-14 05:56:47.161305938 +0000 UTC m=+1461.199566694" Mar 14 05:56:47 crc kubenswrapper[4817]: I0314 05:56:47.392485 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 05:56:47 crc kubenswrapper[4817]: I0314 
05:56:47.394178 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 05:56:48 crc kubenswrapper[4817]: I0314 05:56:48.430356 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 05:56:48 crc kubenswrapper[4817]: I0314 05:56:48.430403 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 05:56:50 crc kubenswrapper[4817]: E0314 05:56:50.988534 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc659724_8af1_4a0d_abbc_e7f1d8190774.slice\": RecentStats: unable to find data in memory cache]" Mar 14 05:56:53 crc kubenswrapper[4817]: I0314 05:56:53.177873 4817 generic.go:334] "Generic (PLEG): container finished" podID="d45463be-ecf1-4c50-a812-29dd2e00dffe" containerID="a462159a690487218798a332fdb6a0ebb6a9ce79a08931c5268282f010b8478d" exitCode=0 Mar 14 05:56:53 crc kubenswrapper[4817]: I0314 05:56:53.177979 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sz6pp" event={"ID":"d45463be-ecf1-4c50-a812-29dd2e00dffe","Type":"ContainerDied","Data":"a462159a690487218798a332fdb6a0ebb6a9ce79a08931c5268282f010b8478d"} Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.546024 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.731596 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq7b9\" (UniqueName: \"kubernetes.io/projected/d45463be-ecf1-4c50-a812-29dd2e00dffe-kube-api-access-fq7b9\") pod \"d45463be-ecf1-4c50-a812-29dd2e00dffe\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.731883 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-scripts\") pod \"d45463be-ecf1-4c50-a812-29dd2e00dffe\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.732053 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-combined-ca-bundle\") pod \"d45463be-ecf1-4c50-a812-29dd2e00dffe\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.732291 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-config-data\") pod \"d45463be-ecf1-4c50-a812-29dd2e00dffe\" (UID: \"d45463be-ecf1-4c50-a812-29dd2e00dffe\") " Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.740163 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-scripts" (OuterVolumeSpecName: "scripts") pod "d45463be-ecf1-4c50-a812-29dd2e00dffe" (UID: "d45463be-ecf1-4c50-a812-29dd2e00dffe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.741665 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45463be-ecf1-4c50-a812-29dd2e00dffe-kube-api-access-fq7b9" (OuterVolumeSpecName: "kube-api-access-fq7b9") pod "d45463be-ecf1-4c50-a812-29dd2e00dffe" (UID: "d45463be-ecf1-4c50-a812-29dd2e00dffe"). InnerVolumeSpecName "kube-api-access-fq7b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.762645 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d45463be-ecf1-4c50-a812-29dd2e00dffe" (UID: "d45463be-ecf1-4c50-a812-29dd2e00dffe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.770233 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-config-data" (OuterVolumeSpecName: "config-data") pod "d45463be-ecf1-4c50-a812-29dd2e00dffe" (UID: "d45463be-ecf1-4c50-a812-29dd2e00dffe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.834554 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.834590 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.834600 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq7b9\" (UniqueName: \"kubernetes.io/projected/d45463be-ecf1-4c50-a812-29dd2e00dffe-kube-api-access-fq7b9\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:54 crc kubenswrapper[4817]: I0314 05:56:54.834612 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d45463be-ecf1-4c50-a812-29dd2e00dffe-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.204619 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sz6pp" event={"ID":"d45463be-ecf1-4c50-a812-29dd2e00dffe","Type":"ContainerDied","Data":"90e9afee49e5e45e8502f6622eb886ff3ec524e210fcb9dde97a538629804d76"} Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.204673 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e9afee49e5e45e8502f6622eb886ff3ec524e210fcb9dde97a538629804d76" Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.204690 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sz6pp" Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.388995 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.389072 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.440822 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.481949 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.482299 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" containerName="nova-scheduler-scheduler" containerID="cri-o://1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c" gracePeriod=30 Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.551562 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.552066 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-metadata" containerID="cri-o://22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32" gracePeriod=30 Mar 14 05:56:55 crc kubenswrapper[4817]: I0314 05:56:55.551871 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-log" containerID="cri-o://16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb" gracePeriod=30 Mar 14 05:56:56 crc kubenswrapper[4817]: I0314 05:56:56.226258 4817 generic.go:334] "Generic 
(PLEG): container finished" podID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerID="16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb" exitCode=143 Mar 14 05:56:56 crc kubenswrapper[4817]: I0314 05:56:56.226641 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-log" containerID="cri-o://6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e" gracePeriod=30 Mar 14 05:56:56 crc kubenswrapper[4817]: I0314 05:56:56.226731 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c759869-bcda-40b9-83b4-ba6d9f53f57d","Type":"ContainerDied","Data":"16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb"} Mar 14 05:56:56 crc kubenswrapper[4817]: I0314 05:56:56.226874 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-api" containerID="cri-o://228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed" gracePeriod=30 Mar 14 05:56:57 crc kubenswrapper[4817]: I0314 05:56:57.242849 4817 generic.go:334] "Generic (PLEG): container finished" podID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerID="6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e" exitCode=143 Mar 14 05:56:57 crc kubenswrapper[4817]: I0314 05:56:57.242986 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f75824d0-46b7-44e2-8682-b2bf5158a240","Type":"ContainerDied","Data":"6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e"} Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.230950 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.277479 4817 generic.go:334] "Generic (PLEG): container finished" podID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerID="22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32" exitCode=0 Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.277554 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c759869-bcda-40b9-83b4-ba6d9f53f57d","Type":"ContainerDied","Data":"22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32"} Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.277601 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8c759869-bcda-40b9-83b4-ba6d9f53f57d","Type":"ContainerDied","Data":"0ce467c772b6faeda391d2b7731bfb619f662ac05da7e2f3f8e8c4ab6f592e7e"} Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.277639 4817 scope.go:117] "RemoveContainer" containerID="22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.277649 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.317603 4817 scope.go:117] "RemoveContainer" containerID="16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.334578 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-nova-metadata-tls-certs\") pod \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.336198 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-combined-ca-bundle\") pod \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.336407 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c759869-bcda-40b9-83b4-ba6d9f53f57d-logs\") pod \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.336709 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfx8c\" (UniqueName: \"kubernetes.io/projected/8c759869-bcda-40b9-83b4-ba6d9f53f57d-kube-api-access-nfx8c\") pod \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\" (UID: \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.337078 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-config-data\") pod \"8c759869-bcda-40b9-83b4-ba6d9f53f57d\" (UID: 
\"8c759869-bcda-40b9-83b4-ba6d9f53f57d\") " Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.337355 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c759869-bcda-40b9-83b4-ba6d9f53f57d-logs" (OuterVolumeSpecName: "logs") pod "8c759869-bcda-40b9-83b4-ba6d9f53f57d" (UID: "8c759869-bcda-40b9-83b4-ba6d9f53f57d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.338309 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c759869-bcda-40b9-83b4-ba6d9f53f57d-logs\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.350282 4817 scope.go:117] "RemoveContainer" containerID="22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.350339 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c759869-bcda-40b9-83b4-ba6d9f53f57d-kube-api-access-nfx8c" (OuterVolumeSpecName: "kube-api-access-nfx8c") pod "8c759869-bcda-40b9-83b4-ba6d9f53f57d" (UID: "8c759869-bcda-40b9-83b4-ba6d9f53f57d"). InnerVolumeSpecName "kube-api-access-nfx8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:56:59 crc kubenswrapper[4817]: E0314 05:56:59.351128 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32\": container with ID starting with 22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32 not found: ID does not exist" containerID="22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.351174 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32"} err="failed to get container status \"22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32\": rpc error: code = NotFound desc = could not find container \"22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32\": container with ID starting with 22e64b030784ea1ef93bd5ea3d01461e138437efc2ce9b8c7c94d51306a82d32 not found: ID does not exist" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.351203 4817 scope.go:117] "RemoveContainer" containerID="16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb" Mar 14 05:56:59 crc kubenswrapper[4817]: E0314 05:56:59.351748 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb\": container with ID starting with 16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb not found: ID does not exist" containerID="16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.351802 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb"} 
err="failed to get container status \"16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb\": rpc error: code = NotFound desc = could not find container \"16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb\": container with ID starting with 16b2a22b120a5e4f62f873ec61b647183db7f1776d92ea4ea05871e5de59e4fb not found: ID does not exist" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.368778 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-config-data" (OuterVolumeSpecName: "config-data") pod "8c759869-bcda-40b9-83b4-ba6d9f53f57d" (UID: "8c759869-bcda-40b9-83b4-ba6d9f53f57d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.389233 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8c759869-bcda-40b9-83b4-ba6d9f53f57d" (UID: "8c759869-bcda-40b9-83b4-ba6d9f53f57d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.395061 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c759869-bcda-40b9-83b4-ba6d9f53f57d" (UID: "8c759869-bcda-40b9-83b4-ba6d9f53f57d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.441013 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.441541 4817 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.441556 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c759869-bcda-40b9-83b4-ba6d9f53f57d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.441570 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfx8c\" (UniqueName: \"kubernetes.io/projected/8c759869-bcda-40b9-83b4-ba6d9f53f57d-kube-api-access-nfx8c\") on node \"crc\" DevicePath \"\"" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.615853 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.637607 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.688011 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:59 crc kubenswrapper[4817]: E0314 05:56:59.690427 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-metadata" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.690459 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" 
containerName="nova-metadata-metadata" Mar 14 05:56:59 crc kubenswrapper[4817]: E0314 05:56:59.690486 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-log" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.690493 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-log" Mar 14 05:56:59 crc kubenswrapper[4817]: E0314 05:56:59.690503 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45463be-ecf1-4c50-a812-29dd2e00dffe" containerName="nova-manage" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.690510 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45463be-ecf1-4c50-a812-29dd2e00dffe" containerName="nova-manage" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.690921 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-metadata" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.690948 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45463be-ecf1-4c50-a812-29dd2e00dffe" containerName="nova-manage" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.690965 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" containerName="nova-metadata-log" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.692567 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.696325 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.696591 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.704702 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.786967 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.787095 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq4pq\" (UniqueName: \"kubernetes.io/projected/d905ec00-43c0-4f8b-a52d-414b74697fb2-kube-api-access-vq4pq\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.787128 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-config-data\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.787241 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.787374 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d905ec00-43c0-4f8b-a52d-414b74697fb2-logs\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.890211 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq4pq\" (UniqueName: \"kubernetes.io/projected/d905ec00-43c0-4f8b-a52d-414b74697fb2-kube-api-access-vq4pq\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.890296 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-config-data\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.890401 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.890609 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d905ec00-43c0-4f8b-a52d-414b74697fb2-logs\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " 
pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.890687 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.891538 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d905ec00-43c0-4f8b-a52d-414b74697fb2-logs\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.897059 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.897589 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-config-data\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.899168 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d905ec00-43c0-4f8b-a52d-414b74697fb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:56:59 crc kubenswrapper[4817]: I0314 05:56:59.918516 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq4pq\" (UniqueName: 
\"kubernetes.io/projected/d905ec00-43c0-4f8b-a52d-414b74697fb2-kube-api-access-vq4pq\") pod \"nova-metadata-0\" (UID: \"d905ec00-43c0-4f8b-a52d-414b74697fb2\") " pod="openstack/nova-metadata-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.023707 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.213920 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.293303 4817 generic.go:334] "Generic (PLEG): container finished" podID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerID="228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed" exitCode=0 Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.293378 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f75824d0-46b7-44e2-8682-b2bf5158a240","Type":"ContainerDied","Data":"228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed"} Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.293412 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f75824d0-46b7-44e2-8682-b2bf5158a240","Type":"ContainerDied","Data":"c1807c1a517967daee92168ad47c1a329644ceb8fb47e7cc181c6f209edbee3f"} Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.293433 4817 scope.go:117] "RemoveContainer" containerID="228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.293604 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.299026 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-internal-tls-certs\") pod \"f75824d0-46b7-44e2-8682-b2bf5158a240\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.299084 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66cz2\" (UniqueName: \"kubernetes.io/projected/f75824d0-46b7-44e2-8682-b2bf5158a240-kube-api-access-66cz2\") pod \"f75824d0-46b7-44e2-8682-b2bf5158a240\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.299182 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f75824d0-46b7-44e2-8682-b2bf5158a240-logs\") pod \"f75824d0-46b7-44e2-8682-b2bf5158a240\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.299267 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-combined-ca-bundle\") pod \"f75824d0-46b7-44e2-8682-b2bf5158a240\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.299423 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-config-data\") pod \"f75824d0-46b7-44e2-8682-b2bf5158a240\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.299579 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-public-tls-certs\") pod \"f75824d0-46b7-44e2-8682-b2bf5158a240\" (UID: \"f75824d0-46b7-44e2-8682-b2bf5158a240\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.300654 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f75824d0-46b7-44e2-8682-b2bf5158a240-logs" (OuterVolumeSpecName: "logs") pod "f75824d0-46b7-44e2-8682-b2bf5158a240" (UID: "f75824d0-46b7-44e2-8682-b2bf5158a240"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.309380 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75824d0-46b7-44e2-8682-b2bf5158a240-kube-api-access-66cz2" (OuterVolumeSpecName: "kube-api-access-66cz2") pod "f75824d0-46b7-44e2-8682-b2bf5158a240" (UID: "f75824d0-46b7-44e2-8682-b2bf5158a240"). InnerVolumeSpecName "kube-api-access-66cz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.330186 4817 scope.go:117] "RemoveContainer" containerID="6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.335055 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-config-data" (OuterVolumeSpecName: "config-data") pod "f75824d0-46b7-44e2-8682-b2bf5158a240" (UID: "f75824d0-46b7-44e2-8682-b2bf5158a240"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.344608 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f75824d0-46b7-44e2-8682-b2bf5158a240" (UID: "f75824d0-46b7-44e2-8682-b2bf5158a240"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.355624 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f75824d0-46b7-44e2-8682-b2bf5158a240" (UID: "f75824d0-46b7-44e2-8682-b2bf5158a240"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.363121 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f75824d0-46b7-44e2-8682-b2bf5158a240" (UID: "f75824d0-46b7-44e2-8682-b2bf5158a240"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.387820 4817 scope.go:117] "RemoveContainer" containerID="228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed" Mar 14 05:57:00 crc kubenswrapper[4817]: E0314 05:57:00.388363 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed\": container with ID starting with 228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed not found: ID does not exist" containerID="228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.388408 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed"} err="failed to get container status \"228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed\": rpc error: code = NotFound desc = could not find container \"228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed\": container with ID starting with 228d7170bd9cc622a8e5a0267f2f1fccee561ebd0e3c089eba92dab39503afed not found: ID does not exist" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.388433 4817 scope.go:117] "RemoveContainer" containerID="6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e" Mar 14 05:57:00 crc kubenswrapper[4817]: E0314 05:57:00.388874 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e\": container with ID starting with 6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e not found: ID does not exist" containerID="6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.388912 
4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e"} err="failed to get container status \"6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e\": rpc error: code = NotFound desc = could not find container \"6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e\": container with ID starting with 6fe41825a261b1de3e3898e965cd01c70e44bef71f1c5e47af07b1f1908c777e not found: ID does not exist" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.402549 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.402586 4817 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.402600 4817 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.402610 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66cz2\" (UniqueName: \"kubernetes.io/projected/f75824d0-46b7-44e2-8682-b2bf5158a240-kube-api-access-66cz2\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.402621 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f75824d0-46b7-44e2-8682-b2bf5158a240-logs\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.402631 4817 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75824d0-46b7-44e2-8682-b2bf5158a240-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: E0314 05:57:00.428145 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c is running failed: container process not found" containerID="1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 05:57:00 crc kubenswrapper[4817]: E0314 05:57:00.429025 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c is running failed: container process not found" containerID="1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 05:57:00 crc kubenswrapper[4817]: E0314 05:57:00.429771 4817 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c is running failed: container process not found" containerID="1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 05:57:00 crc kubenswrapper[4817]: E0314 05:57:00.429823 4817 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" containerName="nova-scheduler-scheduler" Mar 14 05:57:00 
crc kubenswrapper[4817]: I0314 05:57:00.565249 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 05:57:00 crc kubenswrapper[4817]: W0314 05:57:00.586318 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd905ec00_43c0_4f8b_a52d_414b74697fb2.slice/crio-44c51a6da33359664c5722474ed42357a89fa3e6c4ca89dc19af043cc5f77a58 WatchSource:0}: Error finding container 44c51a6da33359664c5722474ed42357a89fa3e6c4ca89dc19af043cc5f77a58: Status 404 returned error can't find the container with id 44c51a6da33359664c5722474ed42357a89fa3e6c4ca89dc19af043cc5f77a58 Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.635964 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.648921 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.676070 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 05:57:00 crc kubenswrapper[4817]: E0314 05:57:00.676555 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-log" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.676569 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-log" Mar 14 05:57:00 crc kubenswrapper[4817]: E0314 05:57:00.676598 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-api" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.676604 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-api" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.676793 4817 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-api" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.676812 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" containerName="nova-api-log" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.677908 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.684438 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.685763 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.686027 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.697181 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.743002 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.743487 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c759869-bcda-40b9-83b4-ba6d9f53f57d" path="/var/lib/kubelet/pods/8c759869-bcda-40b9-83b4-ba6d9f53f57d/volumes" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.744186 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75824d0-46b7-44e2-8682-b2bf5158a240" path="/var/lib/kubelet/pods/f75824d0-46b7-44e2-8682-b2bf5158a240/volumes" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.816675 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4p9w\" (UniqueName: \"kubernetes.io/projected/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-kube-api-access-t4p9w\") pod \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.816732 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-combined-ca-bundle\") pod \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.816759 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-config-data\") pod \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\" (UID: \"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636\") " Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.817222 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzmd\" (UniqueName: \"kubernetes.io/projected/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-kube-api-access-lqzmd\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " 
pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.817282 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-config-data\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.817307 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.817378 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-logs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.817435 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.817458 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.822197 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-kube-api-access-t4p9w" (OuterVolumeSpecName: "kube-api-access-t4p9w") pod "d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" (UID: "d3e0dd1c-81ba-40c9-8db8-7d6d79d33636"). InnerVolumeSpecName "kube-api-access-t4p9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.846164 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" (UID: "d3e0dd1c-81ba-40c9-8db8-7d6d79d33636"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.849163 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-config-data" (OuterVolumeSpecName: "config-data") pod "d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" (UID: "d3e0dd1c-81ba-40c9-8db8-7d6d79d33636"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.919729 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.919870 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-logs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.919947 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.919975 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.920033 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqzmd\" (UniqueName: \"kubernetes.io/projected/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-kube-api-access-lqzmd\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.920091 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-config-data\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.920156 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4p9w\" (UniqueName: \"kubernetes.io/projected/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-kube-api-access-t4p9w\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.920171 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.920183 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.924314 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.925397 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-config-data\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.926083 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-logs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 
14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.926862 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.928003 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:00 crc kubenswrapper[4817]: I0314 05:57:00.942699 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqzmd\" (UniqueName: \"kubernetes.io/projected/fd4cb5c8-0bac-4213-bc4d-42805d1b03f7-kube-api-access-lqzmd\") pod \"nova-api-0\" (UID: \"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7\") " pod="openstack/nova-api-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.018639 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 05:57:01 crc kubenswrapper[4817]: E0314 05:57:01.279008 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc659724_8af1_4a0d_abbc_e7f1d8190774.slice\": RecentStats: unable to find data in memory cache]" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.310137 4817 generic.go:334] "Generic (PLEG): container finished" podID="d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" containerID="1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c" exitCode=0 Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.310206 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636","Type":"ContainerDied","Data":"1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c"} Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.310262 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3e0dd1c-81ba-40c9-8db8-7d6d79d33636","Type":"ContainerDied","Data":"b7503f344eb7931ee0d486dcffd4fb4145319a7fccbce300dd020caf410cd55a"} Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.310282 4817 scope.go:117] "RemoveContainer" containerID="1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.310220 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.317751 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d905ec00-43c0-4f8b-a52d-414b74697fb2","Type":"ContainerStarted","Data":"0fb726f81ff2acde0fedf1138958eaece51ab3bdb089e3c7ec57139fe91423fa"} Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.317794 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d905ec00-43c0-4f8b-a52d-414b74697fb2","Type":"ContainerStarted","Data":"48a380aa9e65a3e7e220912026174c685cf4c614ecbab9aa887315d7b829e853"} Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.317808 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d905ec00-43c0-4f8b-a52d-414b74697fb2","Type":"ContainerStarted","Data":"44c51a6da33359664c5722474ed42357a89fa3e6c4ca89dc19af043cc5f77a58"} Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.375714 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.375687319 podStartE2EDuration="2.375687319s" podCreationTimestamp="2026-03-14 05:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:57:01.367397473 +0000 UTC m=+1475.405658239" watchObservedRunningTime="2026-03-14 05:57:01.375687319 +0000 UTC m=+1475.413948085" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.380457 4817 scope.go:117] "RemoveContainer" containerID="1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c" Mar 14 05:57:01 crc kubenswrapper[4817]: E0314 05:57:01.388200 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c\": container with ID starting with 
1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c not found: ID does not exist" containerID="1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.388455 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c"} err="failed to get container status \"1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c\": rpc error: code = NotFound desc = could not find container \"1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c\": container with ID starting with 1b61dd8be66a13cf823abcd3fe4184ee0178aba2a76ad9343fc99de7fb27235c not found: ID does not exist" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.429772 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.440010 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.449280 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:57:01 crc kubenswrapper[4817]: E0314 05:57:01.449912 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" containerName="nova-scheduler-scheduler" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.449938 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" containerName="nova-scheduler-scheduler" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.450144 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" containerName="nova-scheduler-scheduler" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.450937 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.467835 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.493028 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.524619 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.532056 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65fa238-9d14-4be7-ae7b-0b3eb077d575-config-data\") pod \"nova-scheduler-0\" (UID: \"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.532103 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27zch\" (UniqueName: \"kubernetes.io/projected/e65fa238-9d14-4be7-ae7b-0b3eb077d575-kube-api-access-27zch\") pod \"nova-scheduler-0\" (UID: \"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.532181 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65fa238-9d14-4be7-ae7b-0b3eb077d575-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.633964 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65fa238-9d14-4be7-ae7b-0b3eb077d575-config-data\") pod \"nova-scheduler-0\" (UID: 
\"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.634124 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27zch\" (UniqueName: \"kubernetes.io/projected/e65fa238-9d14-4be7-ae7b-0b3eb077d575-kube-api-access-27zch\") pod \"nova-scheduler-0\" (UID: \"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.634239 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65fa238-9d14-4be7-ae7b-0b3eb077d575-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.641077 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65fa238-9d14-4be7-ae7b-0b3eb077d575-config-data\") pod \"nova-scheduler-0\" (UID: \"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.641175 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65fa238-9d14-4be7-ae7b-0b3eb077d575-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.656155 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27zch\" (UniqueName: \"kubernetes.io/projected/e65fa238-9d14-4be7-ae7b-0b3eb077d575-kube-api-access-27zch\") pod \"nova-scheduler-0\" (UID: \"e65fa238-9d14-4be7-ae7b-0b3eb077d575\") " pod="openstack/nova-scheduler-0" Mar 14 05:57:01 crc kubenswrapper[4817]: I0314 05:57:01.786162 4817 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 05:57:02 crc kubenswrapper[4817]: I0314 05:57:02.276799 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 05:57:02 crc kubenswrapper[4817]: W0314 05:57:02.297535 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode65fa238_9d14_4be7_ae7b_0b3eb077d575.slice/crio-58d0eb6d5db810e58bcb962a4b128fad9127ee8d7229aaba1e1a9dc76a36c48d WatchSource:0}: Error finding container 58d0eb6d5db810e58bcb962a4b128fad9127ee8d7229aaba1e1a9dc76a36c48d: Status 404 returned error can't find the container with id 58d0eb6d5db810e58bcb962a4b128fad9127ee8d7229aaba1e1a9dc76a36c48d Mar 14 05:57:02 crc kubenswrapper[4817]: I0314 05:57:02.330542 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7","Type":"ContainerStarted","Data":"ad28f1c5be8121cf6c08d04dcf7214a8874b4fd5e9012c3d265258b30dae9303"} Mar 14 05:57:02 crc kubenswrapper[4817]: I0314 05:57:02.331033 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7","Type":"ContainerStarted","Data":"f5a6721616f3d4c3cdaa8127568cc589ea1583090b442c907e89bd09247cff7d"} Mar 14 05:57:02 crc kubenswrapper[4817]: I0314 05:57:02.331051 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4cb5c8-0bac-4213-bc4d-42805d1b03f7","Type":"ContainerStarted","Data":"95f65a5497c1e45aa17c6a78c6e7b3b3267b0b3bdb33e16953e30c1cf748d72a"} Mar 14 05:57:02 crc kubenswrapper[4817]: I0314 05:57:02.335536 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e65fa238-9d14-4be7-ae7b-0b3eb077d575","Type":"ContainerStarted","Data":"58d0eb6d5db810e58bcb962a4b128fad9127ee8d7229aaba1e1a9dc76a36c48d"} Mar 14 05:57:02 crc 
kubenswrapper[4817]: I0314 05:57:02.365030 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.365007354 podStartE2EDuration="2.365007354s" podCreationTimestamp="2026-03-14 05:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:57:02.357161631 +0000 UTC m=+1476.395422447" watchObservedRunningTime="2026-03-14 05:57:02.365007354 +0000 UTC m=+1476.403268100" Mar 14 05:57:02 crc kubenswrapper[4817]: I0314 05:57:02.753997 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e0dd1c-81ba-40c9-8db8-7d6d79d33636" path="/var/lib/kubelet/pods/d3e0dd1c-81ba-40c9-8db8-7d6d79d33636/volumes" Mar 14 05:57:03 crc kubenswrapper[4817]: I0314 05:57:03.364101 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e65fa238-9d14-4be7-ae7b-0b3eb077d575","Type":"ContainerStarted","Data":"7421d08a4dccfc6e94464c1608d79648931939bdda22ac577af16b8a21bab0c3"} Mar 14 05:57:03 crc kubenswrapper[4817]: I0314 05:57:03.385067 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.385043656 podStartE2EDuration="2.385043656s" podCreationTimestamp="2026-03-14 05:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:57:03.380697272 +0000 UTC m=+1477.418958008" watchObservedRunningTime="2026-03-14 05:57:03.385043656 +0000 UTC m=+1477.423304402" Mar 14 05:57:05 crc kubenswrapper[4817]: I0314 05:57:05.291996 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 05:57:06 crc kubenswrapper[4817]: I0314 05:57:06.787089 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 14 05:57:08 crc 
kubenswrapper[4817]: I0314 05:57:08.565381 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:57:08 crc kubenswrapper[4817]: I0314 05:57:08.565948 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:57:10 crc kubenswrapper[4817]: I0314 05:57:10.025515 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 05:57:10 crc kubenswrapper[4817]: I0314 05:57:10.026038 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 05:57:11 crc kubenswrapper[4817]: I0314 05:57:11.020438 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 05:57:11 crc kubenswrapper[4817]: I0314 05:57:11.020998 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 05:57:11 crc kubenswrapper[4817]: I0314 05:57:11.046802 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d905ec00-43c0-4f8b-a52d-414b74697fb2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 05:57:11 crc kubenswrapper[4817]: I0314 05:57:11.046836 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d905ec00-43c0-4f8b-a52d-414b74697fb2" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 05:57:11 crc kubenswrapper[4817]: E0314 05:57:11.551322 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc659724_8af1_4a0d_abbc_e7f1d8190774.slice\": RecentStats: unable to find data in memory cache]" Mar 14 05:57:11 crc kubenswrapper[4817]: I0314 05:57:11.787363 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 14 05:57:11 crc kubenswrapper[4817]: I0314 05:57:11.825220 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 14 05:57:12 crc kubenswrapper[4817]: I0314 05:57:12.103231 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd4cb5c8-0bac-4213-bc4d-42805d1b03f7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.194:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 05:57:12 crc kubenswrapper[4817]: I0314 05:57:12.103355 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd4cb5c8-0bac-4213-bc4d-42805d1b03f7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.194:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 05:57:12 crc kubenswrapper[4817]: I0314 05:57:12.528637 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 14 05:57:18 crc kubenswrapper[4817]: I0314 05:57:18.024815 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 05:57:18 crc kubenswrapper[4817]: I0314 05:57:18.025597 4817 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 05:57:19 crc kubenswrapper[4817]: I0314 05:57:19.020332 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 05:57:19 crc kubenswrapper[4817]: I0314 05:57:19.020390 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 05:57:20 crc kubenswrapper[4817]: I0314 05:57:20.033469 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 05:57:20 crc kubenswrapper[4817]: I0314 05:57:20.035622 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 05:57:20 crc kubenswrapper[4817]: I0314 05:57:20.041287 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 14 05:57:20 crc kubenswrapper[4817]: I0314 05:57:20.606921 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 14 05:57:21 crc kubenswrapper[4817]: I0314 05:57:21.026042 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 05:57:21 crc kubenswrapper[4817]: I0314 05:57:21.027779 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 05:57:21 crc kubenswrapper[4817]: I0314 05:57:21.033887 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 05:57:21 crc kubenswrapper[4817]: I0314 05:57:21.627243 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 05:57:21 crc kubenswrapper[4817]: E0314 05:57:21.827198 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc659724_8af1_4a0d_abbc_e7f1d8190774.slice\": RecentStats: unable to find data in memory cache]" Mar 14 05:57:29 crc kubenswrapper[4817]: I0314 05:57:29.462669 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 05:57:31 crc kubenswrapper[4817]: I0314 05:57:31.063191 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 05:57:34 crc kubenswrapper[4817]: I0314 05:57:34.079324 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" containerName="rabbitmq" containerID="cri-o://0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9" gracePeriod=604796 Mar 14 05:57:35 crc kubenswrapper[4817]: I0314 05:57:35.785701 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerName="rabbitmq" containerID="cri-o://cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816" gracePeriod=604796 Mar 14 05:57:38 crc kubenswrapper[4817]: I0314 05:57:38.566098 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:57:38 crc kubenswrapper[4817]: I0314 05:57:38.566620 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:57:38 crc kubenswrapper[4817]: I0314 05:57:38.566679 4817 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 05:57:38 crc kubenswrapper[4817]: I0314 05:57:38.567639 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8509fe209e0d6b6f9ac3932c37444f6f3b02987e960352c4af8c1492f53dab1b"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 05:57:38 crc kubenswrapper[4817]: I0314 05:57:38.567690 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://8509fe209e0d6b6f9ac3932c37444f6f3b02987e960352c4af8c1492f53dab1b" gracePeriod=600 Mar 14 05:57:38 crc kubenswrapper[4817]: I0314 05:57:38.802176 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="8509fe209e0d6b6f9ac3932c37444f6f3b02987e960352c4af8c1492f53dab1b" exitCode=0 Mar 14 05:57:38 crc kubenswrapper[4817]: I0314 05:57:38.802249 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"8509fe209e0d6b6f9ac3932c37444f6f3b02987e960352c4af8c1492f53dab1b"} Mar 14 05:57:38 crc kubenswrapper[4817]: I0314 05:57:38.802602 4817 scope.go:117] "RemoveContainer" containerID="4cedc780ac8b4d762839f86c77e2e11ff9cb9f77222802713452641e56fdcbca" Mar 14 05:57:39 crc kubenswrapper[4817]: I0314 05:57:39.814207 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"} Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.744374 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.830353 4817 generic.go:334] "Generic (PLEG): container finished" podID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" containerID="0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9" exitCode=0 Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.830653 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.830695 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa3ffb28-ad8e-4691-a5ff-ae17d083a019","Type":"ContainerDied","Data":"0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9"} Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.830802 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa3ffb28-ad8e-4691-a5ff-ae17d083a019","Type":"ContainerDied","Data":"83903c220563b6bc263ea2d00bb8243b6096e975d3c43ae9398fc9f15b3f99b2"} Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.830846 4817 scope.go:117] "RemoveContainer" containerID="0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.872586 4817 scope.go:117] "RemoveContainer" containerID="c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878203 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-erlang-cookie\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878321 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-plugins\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878391 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-tls\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878438 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-server-conf\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878574 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878611 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-confd\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878647 4817 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-erlang-cookie-secret\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878744 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vslw5\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-kube-api-access-vslw5\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878826 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-pod-info\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878849 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-config-data\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.878882 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-plugins-conf\") pod \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\" (UID: \"aa3ffb28-ad8e-4691-a5ff-ae17d083a019\") " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.879229 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod 
"aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.880052 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.880221 4817 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.880241 4817 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.880871 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.891686 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.894434 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.894821 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.894983 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-pod-info" (OuterVolumeSpecName: "pod-info") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.912686 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-kube-api-access-vslw5" (OuterVolumeSpecName: "kube-api-access-vslw5") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "kube-api-access-vslw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.927309 4817 scope.go:117] "RemoveContainer" containerID="0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.928751 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-config-data" (OuterVolumeSpecName: "config-data") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: E0314 05:57:40.928192 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9\": container with ID starting with 0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9 not found: ID does not exist" containerID="0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.929317 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9"} err="failed to get container status \"0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9\": rpc error: code = NotFound desc = could not find container \"0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9\": container with ID starting with 0c6cb09680c9c8538f54f5a04df34148969b20f5dfb371a2ffa1a57b867113b9 not found: ID does not exist" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.929356 4817 scope.go:117] "RemoveContainer" containerID="c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e" Mar 14 05:57:40 crc kubenswrapper[4817]: E0314 05:57:40.929823 4817 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e\": container with ID starting with c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e not found: ID does not exist" containerID="c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.929854 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e"} err="failed to get container status \"c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e\": rpc error: code = NotFound desc = could not find container \"c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e\": container with ID starting with c13dd87d92fb43d712018864420e12c4785e8855de414522f4a125e31652280e not found: ID does not exist" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.965581 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-server-conf" (OuterVolumeSpecName: "server-conf") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.982876 4817 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.982935 4817 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.982947 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vslw5\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-kube-api-access-vslw5\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.982958 4817 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-pod-info\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.982970 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.982982 4817 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.982992 4817 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.983007 4817 
reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-server-conf\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:40 crc kubenswrapper[4817]: I0314 05:57:40.996961 4817 scope.go:117] "RemoveContainer" containerID="004fc33bcc3ca3b755b9d0a7ef51e4475285017a6406de6578362c3ab1fdce12" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.014404 4817 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.037829 4817 scope.go:117] "RemoveContainer" containerID="465bc17c5eb46eb039b8455167d1b8725fd8ae8d1993542f02cda40fa1323d4b" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.053100 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "aa3ffb28-ad8e-4691-a5ff-ae17d083a019" (UID: "aa3ffb28-ad8e-4691-a5ff-ae17d083a019"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.115210 4817 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.115263 4817 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa3ffb28-ad8e-4691-a5ff-ae17d083a019-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.169704 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.181162 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.224124 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 05:57:41 crc kubenswrapper[4817]: E0314 05:57:41.224750 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" containerName="rabbitmq" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.224771 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" containerName="rabbitmq" Mar 14 05:57:41 crc kubenswrapper[4817]: E0314 05:57:41.224794 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" containerName="setup-container" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.224800 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" containerName="setup-container" Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.224991 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" containerName="rabbitmq" 
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.226164 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.231619 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vcfmf"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.231862 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.232052 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.232176 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.232311 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.235219 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.240752 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.252129 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.337308 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.337497 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.337547 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.337572 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.337615 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-config-data\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.337644 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da2757e1-ec61-4c1f-8060-3da273bd77cd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.337827 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.337962 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.338203 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da2757e1-ec61-4c1f-8060-3da273bd77cd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.338498 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9mf9\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-kube-api-access-n9mf9\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.338688 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.403317 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441153 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441221 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441250 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441286 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-config-data\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441313 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da2757e1-ec61-4c1f-8060-3da273bd77cd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441346 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441375 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441435 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da2757e1-ec61-4c1f-8060-3da273bd77cd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441508 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9mf9\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-kube-api-access-n9mf9\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441552 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441577 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.441662 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.442054 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.442325 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.442509 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.443147 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.443479 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da2757e1-ec61-4c1f-8060-3da273bd77cd-config-data\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.447511 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da2757e1-ec61-4c1f-8060-3da273bd77cd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.457824 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da2757e1-ec61-4c1f-8060-3da273bd77cd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.458407 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.458656 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.467794 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9mf9\" (UniqueName: \"kubernetes.io/projected/da2757e1-ec61-4c1f-8060-3da273bd77cd-kube-api-access-n9mf9\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.474251 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"da2757e1-ec61-4c1f-8060-3da273bd77cd\") " pod="openstack/rabbitmq-server-0"
Mar 14 05:57:41 crc kubenswrapper[4817]: I0314 05:57:41.577435 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.064740 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 14 05:57:42 crc kubenswrapper[4817]: W0314 05:57:42.075733 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda2757e1_ec61_4c1f_8060_3da273bd77cd.slice/crio-495c0838e6085c8169691091b00264570dab36638c0f796d1099a58e8914db0f WatchSource:0}: Error finding container 495c0838e6085c8169691091b00264570dab36638c0f796d1099a58e8914db0f: Status 404 returned error can't find the container with id 495c0838e6085c8169691091b00264570dab36638c0f796d1099a58e8914db0f
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.422533 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567152 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-erlang-cookie\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567207 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-server-conf\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567243 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xht8v\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-kube-api-access-xht8v\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567326 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-plugins-conf\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567375 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-plugins\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567435 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-confd\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567499 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f20e21c5-3d26-4494-a4d7-43323e059f31-pod-info\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567514 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-tls\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567594 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f20e21c5-3d26-4494-a4d7-43323e059f31-erlang-cookie-secret\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567637 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-config-data\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.567665 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"f20e21c5-3d26-4494-a4d7-43323e059f31\" (UID: \"f20e21c5-3d26-4494-a4d7-43323e059f31\") "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.569651 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.570287 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.571043 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.575666 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-kube-api-access-xht8v" (OuterVolumeSpecName: "kube-api-access-xht8v") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "kube-api-access-xht8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.576414 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20e21c5-3d26-4494-a4d7-43323e059f31-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.578648 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.579098 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f20e21c5-3d26-4494-a4d7-43323e059f31-pod-info" (OuterVolumeSpecName: "pod-info") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.589492 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.650527 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-config-data" (OuterVolumeSpecName: "config-data") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669364 4817 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f20e21c5-3d26-4494-a4d7-43323e059f31-pod-info\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669406 4817 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669416 4817 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f20e21c5-3d26-4494-a4d7-43323e059f31-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669426 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669447 4817 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669462 4817 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669475 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xht8v\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-kube-api-access-xht8v\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669487 4817 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-plugins-conf\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.669495 4817 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.676363 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-server-conf" (OuterVolumeSpecName: "server-conf") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.702742 4817 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.766704 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3ffb28-ad8e-4691-a5ff-ae17d083a019" path="/var/lib/kubelet/pods/aa3ffb28-ad8e-4691-a5ff-ae17d083a019/volumes"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.772359 4817 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.772707 4817 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f20e21c5-3d26-4494-a4d7-43323e059f31-server-conf\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.791733 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f20e21c5-3d26-4494-a4d7-43323e059f31" (UID: "f20e21c5-3d26-4494-a4d7-43323e059f31"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.851099 4817 generic.go:334] "Generic (PLEG): container finished" podID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerID="cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816" exitCode=0
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.851170 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.851184 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f20e21c5-3d26-4494-a4d7-43323e059f31","Type":"ContainerDied","Data":"cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816"}
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.851214 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f20e21c5-3d26-4494-a4d7-43323e059f31","Type":"ContainerDied","Data":"0c0a06beec4e579f33b9eac144b521ad83125abef800f71f4b58ba7c01d82664"}
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.851233 4817 scope.go:117] "RemoveContainer" containerID="cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.854043 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da2757e1-ec61-4c1f-8060-3da273bd77cd","Type":"ContainerStarted","Data":"495c0838e6085c8169691091b00264570dab36638c0f796d1099a58e8914db0f"}
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.874726 4817 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f20e21c5-3d26-4494-a4d7-43323e059f31-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.880778 4817 scope.go:117] "RemoveContainer" containerID="52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.896229 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.903785 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.935789 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.936086 4817 scope.go:117] "RemoveContainer" containerID="cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816"
Mar 14 05:57:42 crc kubenswrapper[4817]: E0314 05:57:42.936414 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerName="rabbitmq"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.936443 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerName="rabbitmq"
Mar 14 05:57:42 crc kubenswrapper[4817]: E0314 05:57:42.936533 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816\": container with ID starting with cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816 not found: ID does not exist" containerID="cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.936563 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816"} err="failed to get container status \"cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816\": rpc error: code = NotFound desc = could not find container \"cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816\": container with ID starting with cab3ea4c620f658a6ac2835d8016bc5996923ad6a0c14d96c01c210784c1d816 not found: ID does not exist"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.936592 4817 scope.go:117] "RemoveContainer" containerID="52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997"
Mar 14 05:57:42 crc kubenswrapper[4817]: E0314 05:57:42.936678 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerName="setup-container"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.936692 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerName="setup-container"
Mar 14 05:57:42 crc kubenswrapper[4817]: E0314 05:57:42.936791 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997\": container with ID starting with 52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997 not found: ID does not exist" containerID="52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.936821 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997"} err="failed to get container status \"52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997\": rpc error: code = NotFound desc = could not find container \"52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997\": container with ID starting with 52c696a3ec3d0320a0d65c70a34b2a6b64bb507d8c554e3b55adf284e6965997 not found: ID does not exist"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.937099 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" containerName="rabbitmq"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.951863 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.952013 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.956172 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.956474 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.956664 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.956782 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.957094 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-84h2h"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.957578 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 14 05:57:42 crc kubenswrapper[4817]: I0314 05:57:42.960392 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.078428 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.078491 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.078550 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtc5\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-kube-api-access-7mtc5\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.078812 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.078888 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.078974 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.079035 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.079207 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9ecd33b-91dc-44fc-932d-d962a7835af9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.079270 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9ecd33b-91dc-44fc-932d-d962a7835af9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.079326 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.079343 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.181879 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.181968 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.182011 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mtc5\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-kube-api-access-7mtc5\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.182050 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.182079 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.182110 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 14 05:57:43
crc kubenswrapper[4817]: I0314 05:57:43.182141 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.182203 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9ecd33b-91dc-44fc-932d-d962a7835af9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.182237 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9ecd33b-91dc-44fc-932d-d962a7835af9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.182269 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.182293 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.183398 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.183535 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.183740 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.183828 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.184656 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9ecd33b-91dc-44fc-932d-d962a7835af9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.185072 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.189221 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.189411 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.191732 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9ecd33b-91dc-44fc-932d-d962a7835af9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.192918 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9ecd33b-91dc-44fc-932d-d962a7835af9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.210598 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mtc5\" (UniqueName: \"kubernetes.io/projected/c9ecd33b-91dc-44fc-932d-d962a7835af9-kube-api-access-7mtc5\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 
crc kubenswrapper[4817]: I0314 05:57:43.265497 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9ecd33b-91dc-44fc-932d-d962a7835af9\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.292931 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.865873 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da2757e1-ec61-4c1f-8060-3da273bd77cd","Type":"ContainerStarted","Data":"32ee5a8d4c3f4b5caa78f040eb5b4e35bc4b3d12751615f99eb867dadc73e7bd"} Mar 14 05:57:43 crc kubenswrapper[4817]: I0314 05:57:43.980020 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 05:57:43 crc kubenswrapper[4817]: W0314 05:57:43.981669 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ecd33b_91dc_44fc_932d_d962a7835af9.slice/crio-cff2d3af9ed81755338dadfae914aa508623cf85f0141e1865fd4d01d67829fa WatchSource:0}: Error finding container cff2d3af9ed81755338dadfae914aa508623cf85f0141e1865fd4d01d67829fa: Status 404 returned error can't find the container with id cff2d3af9ed81755338dadfae914aa508623cf85f0141e1865fd4d01d67829fa Mar 14 05:57:44 crc kubenswrapper[4817]: I0314 05:57:44.744513 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f20e21c5-3d26-4494-a4d7-43323e059f31" path="/var/lib/kubelet/pods/f20e21c5-3d26-4494-a4d7-43323e059f31/volumes" Mar 14 05:57:44 crc kubenswrapper[4817]: I0314 05:57:44.880794 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"c9ecd33b-91dc-44fc-932d-d962a7835af9","Type":"ContainerStarted","Data":"cff2d3af9ed81755338dadfae914aa508623cf85f0141e1865fd4d01d67829fa"} Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.063151 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-78mjz"] Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.064858 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.068368 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.083392 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-78mjz"] Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.133167 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-config\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.133258 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdwv\" (UniqueName: \"kubernetes.io/projected/90741a77-24ca-4326-931c-d755f5926519-kube-api-access-wxdwv\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.133282 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: 
\"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.133303 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.133348 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.133380 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-dns-svc\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.235751 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.235864 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-dns-svc\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: 
\"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.236023 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-config\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.236159 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdwv\" (UniqueName: \"kubernetes.io/projected/90741a77-24ca-4326-931c-d755f5926519-kube-api-access-wxdwv\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.236197 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.236228 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.237140 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " 
pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.237146 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-dns-svc\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.237204 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-config\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.237282 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.237496 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.255205 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdwv\" (UniqueName: \"kubernetes.io/projected/90741a77-24ca-4326-931c-d755f5926519-kube-api-access-wxdwv\") pod \"dnsmasq-dns-578b8d767c-78mjz\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") " pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 
05:57:45.384098 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.891644 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-78mjz"] Mar 14 05:57:45 crc kubenswrapper[4817]: I0314 05:57:45.918161 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9ecd33b-91dc-44fc-932d-d962a7835af9","Type":"ContainerStarted","Data":"8f8fcec0386b425e1c7d21e15a517ac1799845e398d9df44d2a3492328eef519"} Mar 14 05:57:46 crc kubenswrapper[4817]: I0314 05:57:46.934201 4817 generic.go:334] "Generic (PLEG): container finished" podID="90741a77-24ca-4326-931c-d755f5926519" containerID="5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e" exitCode=0 Mar 14 05:57:46 crc kubenswrapper[4817]: I0314 05:57:46.935922 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" event={"ID":"90741a77-24ca-4326-931c-d755f5926519","Type":"ContainerDied","Data":"5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e"} Mar 14 05:57:46 crc kubenswrapper[4817]: I0314 05:57:46.935962 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" event={"ID":"90741a77-24ca-4326-931c-d755f5926519","Type":"ContainerStarted","Data":"06bd3f8b20fb42bb740f51640b741739aea7749aacb627dc6e42441b06acd97d"} Mar 14 05:57:47 crc kubenswrapper[4817]: I0314 05:57:47.946354 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" event={"ID":"90741a77-24ca-4326-931c-d755f5926519","Type":"ContainerStarted","Data":"2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4"} Mar 14 05:57:47 crc kubenswrapper[4817]: I0314 05:57:47.946570 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 
05:57:47 crc kubenswrapper[4817]: I0314 05:57:47.964377 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" podStartSLOduration=2.964353985 podStartE2EDuration="2.964353985s" podCreationTimestamp="2026-03-14 05:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:57:47.962242325 +0000 UTC m=+1522.000503081" watchObservedRunningTime="2026-03-14 05:57:47.964353985 +0000 UTC m=+1522.002614721" Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.386138 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.463013 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zkwzm"] Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.463365 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm" podUID="bd3dcc2c-8045-48a2-a884-07978341aef4" containerName="dnsmasq-dns" containerID="cri-o://969a12ead2d02c5597229525ea242f021b097283c3fbfa27a9dfa306f2429620" gracePeriod=10 Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.790835 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-7p784"] Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.809860 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-7p784"] Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.810459 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.899802 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.899871 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.899923 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-config\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.900010 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.900075 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcpxv\" (UniqueName: \"kubernetes.io/projected/ca487333-68c9-470e-b299-c8331d9b59b6-kube-api-access-mcpxv\") pod 
\"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:55 crc kubenswrapper[4817]: I0314 05:57:55.900096 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.001735 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.001827 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcpxv\" (UniqueName: \"kubernetes.io/projected/ca487333-68c9-470e-b299-c8331d9b59b6-kube-api-access-mcpxv\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.001848 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.001940 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: 
\"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.001967 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.001988 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-config\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.003291 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-config\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.003945 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.004543 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" Mar 14 05:57:56 crc 
kubenswrapper[4817]: I0314 05:57:56.004986 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784"
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.007557 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784"
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.033329 4817 generic.go:334] "Generic (PLEG): container finished" podID="bd3dcc2c-8045-48a2-a884-07978341aef4" containerID="969a12ead2d02c5597229525ea242f021b097283c3fbfa27a9dfa306f2429620" exitCode=0
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.033393 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm" event={"ID":"bd3dcc2c-8045-48a2-a884-07978341aef4","Type":"ContainerDied","Data":"969a12ead2d02c5597229525ea242f021b097283c3fbfa27a9dfa306f2429620"}
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.033433 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm" event={"ID":"bd3dcc2c-8045-48a2-a884-07978341aef4","Type":"ContainerDied","Data":"59751bb4e6cf2288ef1728119a4a58c0249d07e64eae762952859530336de65a"}
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.033452 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59751bb4e6cf2288ef1728119a4a58c0249d07e64eae762952859530336de65a"
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.044496 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcpxv\" (UniqueName: \"kubernetes.io/projected/ca487333-68c9-470e-b299-c8331d9b59b6-kube-api-access-mcpxv\") pod \"dnsmasq-dns-fbc59fbb7-7p784\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") " pod="openstack/dnsmasq-dns-fbc59fbb7-7p784"
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.128441 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.136676 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784"
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.210700 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-nb\") pod \"bd3dcc2c-8045-48a2-a884-07978341aef4\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") "
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.210836 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-dns-svc\") pod \"bd3dcc2c-8045-48a2-a884-07978341aef4\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") "
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.210920 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-config\") pod \"bd3dcc2c-8045-48a2-a884-07978341aef4\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") "
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.210968 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn4pr\" (UniqueName: \"kubernetes.io/projected/bd3dcc2c-8045-48a2-a884-07978341aef4-kube-api-access-qn4pr\") pod \"bd3dcc2c-8045-48a2-a884-07978341aef4\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") "
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.211193 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-sb\") pod \"bd3dcc2c-8045-48a2-a884-07978341aef4\" (UID: \"bd3dcc2c-8045-48a2-a884-07978341aef4\") "
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.224145 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3dcc2c-8045-48a2-a884-07978341aef4-kube-api-access-qn4pr" (OuterVolumeSpecName: "kube-api-access-qn4pr") pod "bd3dcc2c-8045-48a2-a884-07978341aef4" (UID: "bd3dcc2c-8045-48a2-a884-07978341aef4"). InnerVolumeSpecName "kube-api-access-qn4pr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.305762 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-config" (OuterVolumeSpecName: "config") pod "bd3dcc2c-8045-48a2-a884-07978341aef4" (UID: "bd3dcc2c-8045-48a2-a884-07978341aef4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.317465 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.317504 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn4pr\" (UniqueName: \"kubernetes.io/projected/bd3dcc2c-8045-48a2-a884-07978341aef4-kube-api-access-qn4pr\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.349600 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd3dcc2c-8045-48a2-a884-07978341aef4" (UID: "bd3dcc2c-8045-48a2-a884-07978341aef4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.365123 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd3dcc2c-8045-48a2-a884-07978341aef4" (UID: "bd3dcc2c-8045-48a2-a884-07978341aef4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.403785 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd3dcc2c-8045-48a2-a884-07978341aef4" (UID: "bd3dcc2c-8045-48a2-a884-07978341aef4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.419244 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.419288 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.419303 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3dcc2c-8045-48a2-a884-07978341aef4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 14 05:57:56 crc kubenswrapper[4817]: I0314 05:57:56.672532 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-7p784"]
Mar 14 05:57:56 crc kubenswrapper[4817]: W0314 05:57:56.685860 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca487333_68c9_470e_b299_c8331d9b59b6.slice/crio-b4a7eee2919098274e07f69a96406263905fc0867bfef3f230f88741b88a60fa WatchSource:0}: Error finding container b4a7eee2919098274e07f69a96406263905fc0867bfef3f230f88741b88a60fa: Status 404 returned error can't find the container with id b4a7eee2919098274e07f69a96406263905fc0867bfef3f230f88741b88a60fa
Mar 14 05:57:57 crc kubenswrapper[4817]: I0314 05:57:57.063566 4817 generic.go:334] "Generic (PLEG): container finished" podID="ca487333-68c9-470e-b299-c8331d9b59b6" containerID="a6a373c7926c77256e248e3649be47eeaec888e378721e3693a632720e3ac6dd" exitCode=0
Mar 14 05:57:57 crc kubenswrapper[4817]: I0314 05:57:57.064066 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-zkwzm"
Mar 14 05:57:57 crc kubenswrapper[4817]: I0314 05:57:57.064383 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" event={"ID":"ca487333-68c9-470e-b299-c8331d9b59b6","Type":"ContainerDied","Data":"a6a373c7926c77256e248e3649be47eeaec888e378721e3693a632720e3ac6dd"}
Mar 14 05:57:57 crc kubenswrapper[4817]: I0314 05:57:57.064455 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" event={"ID":"ca487333-68c9-470e-b299-c8331d9b59b6","Type":"ContainerStarted","Data":"b4a7eee2919098274e07f69a96406263905fc0867bfef3f230f88741b88a60fa"}
Mar 14 05:57:57 crc kubenswrapper[4817]: I0314 05:57:57.128323 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zkwzm"]
Mar 14 05:57:57 crc kubenswrapper[4817]: I0314 05:57:57.137270 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-zkwzm"]
Mar 14 05:57:58 crc kubenswrapper[4817]: I0314 05:57:58.075510 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" event={"ID":"ca487333-68c9-470e-b299-c8331d9b59b6","Type":"ContainerStarted","Data":"3965b2739a25882622ae903fd3aa0e48df1d74dad558117e1161ed1c5de16668"}
Mar 14 05:57:58 crc kubenswrapper[4817]: I0314 05:57:58.076137 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784"
Mar 14 05:57:58 crc kubenswrapper[4817]: I0314 05:57:58.748115 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3dcc2c-8045-48a2-a884-07978341aef4" path="/var/lib/kubelet/pods/bd3dcc2c-8045-48a2-a884-07978341aef4/volumes"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.130711 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" podStartSLOduration=5.130685927
podStartE2EDuration="5.130685927s" podCreationTimestamp="2026-03-14 05:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:57:58.104290045 +0000 UTC m=+1532.142550801" watchObservedRunningTime="2026-03-14 05:58:00.130685927 +0000 UTC m=+1534.168946673"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.140245 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557798-f5p6d"]
Mar 14 05:58:00 crc kubenswrapper[4817]: E0314 05:58:00.140774 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3dcc2c-8045-48a2-a884-07978341aef4" containerName="init"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.140803 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3dcc2c-8045-48a2-a884-07978341aef4" containerName="init"
Mar 14 05:58:00 crc kubenswrapper[4817]: E0314 05:58:00.140855 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3dcc2c-8045-48a2-a884-07978341aef4" containerName="dnsmasq-dns"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.140868 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3dcc2c-8045-48a2-a884-07978341aef4" containerName="dnsmasq-dns"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.141125 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3dcc2c-8045-48a2-a884-07978341aef4" containerName="dnsmasq-dns"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.142053 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557798-f5p6d"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.147262 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.147743 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.148820 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.158516 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557798-f5p6d"]
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.299958 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t94gb\" (UniqueName: \"kubernetes.io/projected/cba77f52-4903-4483-a9f1-86e1a42cd513-kube-api-access-t94gb\") pod \"auto-csr-approver-29557798-f5p6d\" (UID: \"cba77f52-4903-4483-a9f1-86e1a42cd513\") " pod="openshift-infra/auto-csr-approver-29557798-f5p6d"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.404408 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t94gb\" (UniqueName: \"kubernetes.io/projected/cba77f52-4903-4483-a9f1-86e1a42cd513-kube-api-access-t94gb\") pod \"auto-csr-approver-29557798-f5p6d\" (UID: \"cba77f52-4903-4483-a9f1-86e1a42cd513\") " pod="openshift-infra/auto-csr-approver-29557798-f5p6d"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.444000 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t94gb\" (UniqueName: \"kubernetes.io/projected/cba77f52-4903-4483-a9f1-86e1a42cd513-kube-api-access-t94gb\") pod \"auto-csr-approver-29557798-f5p6d\" (UID: \"cba77f52-4903-4483-a9f1-86e1a42cd513\") " pod="openshift-infra/auto-csr-approver-29557798-f5p6d"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.472737 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557798-f5p6d"
Mar 14 05:58:00 crc kubenswrapper[4817]: I0314 05:58:00.919095 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557798-f5p6d"]
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.115210 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557798-f5p6d" event={"ID":"cba77f52-4903-4483-a9f1-86e1a42cd513","Type":"ContainerStarted","Data":"ebd42f7ab6f8be67a4a2d9198d26a7d3a054a19943c3da3b09d55c5a19e746b4"}
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.557400 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5gmdh"]
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.560004 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.573329 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gmdh"]
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.739528 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjjv\" (UniqueName: \"kubernetes.io/projected/4f7c277b-c601-41d3-9a76-5b5b13803d4f-kube-api-access-xcjjv\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.740272 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-catalog-content\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.740332 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-utilities\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.841487 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-catalog-content\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.841565 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-utilities\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.841597 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjjv\" (UniqueName: \"kubernetes.io/projected/4f7c277b-c601-41d3-9a76-5b5b13803d4f-kube-api-access-xcjjv\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.842287 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-utilities\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.842667 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-catalog-content\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.871702 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjjv\" (UniqueName: \"kubernetes.io/projected/4f7c277b-c601-41d3-9a76-5b5b13803d4f-kube-api-access-xcjjv\") pod \"community-operators-5gmdh\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:01 crc kubenswrapper[4817]: I0314 05:58:01.881223 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:02 crc kubenswrapper[4817]: I0314 05:58:02.532981 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gmdh"]
Mar 14 05:58:02 crc kubenswrapper[4817]: W0314 05:58:02.538250 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7c277b_c601_41d3_9a76_5b5b13803d4f.slice/crio-86f67c2ee53dcf4a1120a694fde0317f8d7e96f63f3bd871247d2ba6eba0a6fd WatchSource:0}: Error finding container 86f67c2ee53dcf4a1120a694fde0317f8d7e96f63f3bd871247d2ba6eba0a6fd: Status 404 returned error can't find the container with id 86f67c2ee53dcf4a1120a694fde0317f8d7e96f63f3bd871247d2ba6eba0a6fd
Mar 14 05:58:03 crc kubenswrapper[4817]: I0314 05:58:03.148442 4817 generic.go:334] "Generic (PLEG): container finished" podID="cba77f52-4903-4483-a9f1-86e1a42cd513" containerID="455c57098b48d646a39bc30ff18c513d9a2aae6123587538e8e8ddf5640ef9ba" exitCode=0
Mar 14 05:58:03 crc kubenswrapper[4817]: I0314 05:58:03.148964 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557798-f5p6d" event={"ID":"cba77f52-4903-4483-a9f1-86e1a42cd513","Type":"ContainerDied","Data":"455c57098b48d646a39bc30ff18c513d9a2aae6123587538e8e8ddf5640ef9ba"}
Mar 14 05:58:03 crc kubenswrapper[4817]: I0314 05:58:03.154546 4817 generic.go:334] "Generic (PLEG): container finished" podID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerID="542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03" exitCode=0
Mar 14 05:58:03 crc kubenswrapper[4817]: I0314 05:58:03.154592 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gmdh" event={"ID":"4f7c277b-c601-41d3-9a76-5b5b13803d4f","Type":"ContainerDied","Data":"542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03"}
Mar 14 05:58:03 crc kubenswrapper[4817]: I0314 05:58:03.154617 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gmdh" event={"ID":"4f7c277b-c601-41d3-9a76-5b5b13803d4f","Type":"ContainerStarted","Data":"86f67c2ee53dcf4a1120a694fde0317f8d7e96f63f3bd871247d2ba6eba0a6fd"}
Mar 14 05:58:04 crc kubenswrapper[4817]: I0314 05:58:04.168317 4817 generic.go:334] "Generic (PLEG): container finished" podID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerID="be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1" exitCode=0
Mar 14 05:58:04 crc kubenswrapper[4817]: I0314 05:58:04.168379 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gmdh" event={"ID":"4f7c277b-c601-41d3-9a76-5b5b13803d4f","Type":"ContainerDied","Data":"be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1"}
Mar 14 05:58:04 crc kubenswrapper[4817]: I0314 05:58:04.605656 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557798-f5p6d"
Mar 14 05:58:04 crc kubenswrapper[4817]: I0314 05:58:04.706172 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t94gb\" (UniqueName: \"kubernetes.io/projected/cba77f52-4903-4483-a9f1-86e1a42cd513-kube-api-access-t94gb\") pod \"cba77f52-4903-4483-a9f1-86e1a42cd513\" (UID: \"cba77f52-4903-4483-a9f1-86e1a42cd513\") "
Mar 14 05:58:04 crc kubenswrapper[4817]: I0314 05:58:04.715145 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba77f52-4903-4483-a9f1-86e1a42cd513-kube-api-access-t94gb" (OuterVolumeSpecName: "kube-api-access-t94gb") pod "cba77f52-4903-4483-a9f1-86e1a42cd513" (UID: "cba77f52-4903-4483-a9f1-86e1a42cd513"). InnerVolumeSpecName "kube-api-access-t94gb".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:58:04 crc kubenswrapper[4817]: I0314 05:58:04.808939 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t94gb\" (UniqueName: \"kubernetes.io/projected/cba77f52-4903-4483-a9f1-86e1a42cd513-kube-api-access-t94gb\") on node \"crc\" DevicePath \"\""
Mar 14 05:58:05 crc kubenswrapper[4817]: I0314 05:58:05.183794 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gmdh" event={"ID":"4f7c277b-c601-41d3-9a76-5b5b13803d4f","Type":"ContainerStarted","Data":"00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0"}
Mar 14 05:58:05 crc kubenswrapper[4817]: I0314 05:58:05.189430 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557798-f5p6d" event={"ID":"cba77f52-4903-4483-a9f1-86e1a42cd513","Type":"ContainerDied","Data":"ebd42f7ab6f8be67a4a2d9198d26a7d3a054a19943c3da3b09d55c5a19e746b4"}
Mar 14 05:58:05 crc kubenswrapper[4817]: I0314 05:58:05.189883 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebd42f7ab6f8be67a4a2d9198d26a7d3a054a19943c3da3b09d55c5a19e746b4"
Mar 14 05:58:05 crc kubenswrapper[4817]: I0314 05:58:05.189983 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557798-f5p6d"
Mar 14 05:58:05 crc kubenswrapper[4817]: I0314 05:58:05.223535 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5gmdh" podStartSLOduration=2.786207477 podStartE2EDuration="4.223511343s" podCreationTimestamp="2026-03-14 05:58:01 +0000 UTC" firstStartedPulling="2026-03-14 05:58:03.157178152 +0000 UTC m=+1537.195438898" lastFinishedPulling="2026-03-14 05:58:04.594482018 +0000 UTC m=+1538.632742764" observedRunningTime="2026-03-14 05:58:05.219273452 +0000 UTC m=+1539.257534208" watchObservedRunningTime="2026-03-14 05:58:05.223511343 +0000 UTC m=+1539.261772089"
Mar 14 05:58:05 crc kubenswrapper[4817]: I0314 05:58:05.745670 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557792-b8788"]
Mar 14 05:58:05 crc kubenswrapper[4817]: I0314 05:58:05.759736 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557792-b8788"]
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.138570 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784"
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.219261 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-78mjz"]
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.219574 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" podUID="90741a77-24ca-4326-931c-d755f5926519" containerName="dnsmasq-dns" containerID="cri-o://2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4" gracePeriod=10
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.748875 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7526535b-7dd6-4ccb-837d-121b93caeb5c" path="/var/lib/kubelet/pods/7526535b-7dd6-4ccb-837d-121b93caeb5c/volumes"
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.879156 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-78mjz"
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.979183 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-dns-svc\") pod \"90741a77-24ca-4326-931c-d755f5926519\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") "
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.979260 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-config\") pod \"90741a77-24ca-4326-931c-d755f5926519\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") "
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.979344 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdwv\" (UniqueName: \"kubernetes.io/projected/90741a77-24ca-4326-931c-d755f5926519-kube-api-access-wxdwv\") pod \"90741a77-24ca-4326-931c-d755f5926519\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") "
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.979417 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-nb\") pod \"90741a77-24ca-4326-931c-d755f5926519\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") "
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.979484 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-sb\") pod \"90741a77-24ca-4326-931c-d755f5926519\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") "
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.979605 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-openstack-edpm-ipam\") pod \"90741a77-24ca-4326-931c-d755f5926519\" (UID: \"90741a77-24ca-4326-931c-d755f5926519\") "
Mar 14 05:58:06 crc kubenswrapper[4817]: I0314 05:58:06.992415 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90741a77-24ca-4326-931c-d755f5926519-kube-api-access-wxdwv" (OuterVolumeSpecName: "kube-api-access-wxdwv") pod "90741a77-24ca-4326-931c-d755f5926519" (UID: "90741a77-24ca-4326-931c-d755f5926519"). InnerVolumeSpecName "kube-api-access-wxdwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.033693 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90741a77-24ca-4326-931c-d755f5926519" (UID: "90741a77-24ca-4326-931c-d755f5926519"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.037717 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90741a77-24ca-4326-931c-d755f5926519" (UID: "90741a77-24ca-4326-931c-d755f5926519"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.042574 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90741a77-24ca-4326-931c-d755f5926519" (UID: "90741a77-24ca-4326-931c-d755f5926519"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.044444 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-config" (OuterVolumeSpecName: "config") pod "90741a77-24ca-4326-931c-d755f5926519" (UID: "90741a77-24ca-4326-931c-d755f5926519"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.056802 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "90741a77-24ca-4326-931c-d755f5926519" (UID: "90741a77-24ca-4326-931c-d755f5926519"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.083077 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.083117 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-config\") on node \"crc\" DevicePath \"\""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.083130 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxdwv\" (UniqueName: \"kubernetes.io/projected/90741a77-24ca-4326-931c-d755f5926519-kube-api-access-wxdwv\") on node \"crc\" DevicePath \"\""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.083142 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.083155 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.083165 4817 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/90741a77-24ca-4326-931c-d755f5926519-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.225729 4817 generic.go:334] "Generic (PLEG): container finished" podID="90741a77-24ca-4326-931c-d755f5926519" containerID="2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4" exitCode=0
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.225987 4817 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" event={"ID":"90741a77-24ca-4326-931c-d755f5926519","Type":"ContainerDied","Data":"2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4"}
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.226246 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-78mjz" event={"ID":"90741a77-24ca-4326-931c-d755f5926519","Type":"ContainerDied","Data":"06bd3f8b20fb42bb740f51640b741739aea7749aacb627dc6e42441b06acd97d"}
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.226277 4817 scope.go:117] "RemoveContainer" containerID="2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4"
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.226107 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-78mjz"
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.268993 4817 scope.go:117] "RemoveContainer" containerID="5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e"
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.286929 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-78mjz"]
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.298590 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-78mjz"]
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.312549 4817 scope.go:117] "RemoveContainer" containerID="2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4"
Mar 14 05:58:07 crc kubenswrapper[4817]: E0314 05:58:07.313706 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4\": container with ID starting with 2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4 not found: ID does not exist" containerID="2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4"
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.313768 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4"} err="failed to get container status \"2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4\": rpc error: code = NotFound desc = could not find container \"2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4\": container with ID starting with 2f03946f3673c7c9f676309ab2a7cc3499744199ef6141f643a12ec7cb98bde4 not found: ID does not exist"
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.313801 4817 scope.go:117] "RemoveContainer" containerID="5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e"
Mar 14 05:58:07 crc kubenswrapper[4817]: E0314 05:58:07.314495 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e\": container with ID starting with 5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e not found: ID does not exist" containerID="5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e"
Mar 14 05:58:07 crc kubenswrapper[4817]: I0314 05:58:07.314567 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e"} err="failed to get container status \"5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e\": rpc error: code = NotFound desc = could not find container \"5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e\": container with ID starting with 5d25265e46ddc658a9f83566b630be7b8dce8a5ff3cd8ba6aafe286494612d7e not found: ID does not exist"
Mar 14 05:58:08 crc kubenswrapper[4817]: I0314 05:58:08.748450 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90741a77-24ca-4326-931c-d755f5926519" path="/var/lib/kubelet/pods/90741a77-24ca-4326-931c-d755f5926519/volumes"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.817653 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs"]
Mar 14 05:58:11 crc kubenswrapper[4817]: E0314 05:58:11.818451 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90741a77-24ca-4326-931c-d755f5926519" containerName="dnsmasq-dns"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.818485 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="90741a77-24ca-4326-931c-d755f5926519" containerName="dnsmasq-dns"
Mar 14 05:58:11 crc kubenswrapper[4817]: E0314 05:58:11.818505 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba77f52-4903-4483-a9f1-86e1a42cd513" containerName="oc"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.818513 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba77f52-4903-4483-a9f1-86e1a42cd513" containerName="oc"
Mar 14 05:58:11 crc kubenswrapper[4817]: E0314 05:58:11.818534 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90741a77-24ca-4326-931c-d755f5926519" containerName="init"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.818540 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="90741a77-24ca-4326-931c-d755f5926519" containerName="init"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.818722 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba77f52-4903-4483-a9f1-86e1a42cd513" containerName="oc"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.818737 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="90741a77-24ca-4326-931c-d755f5926519" containerName="dnsmasq-dns"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.819462 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.825696 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.825943 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.827402 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.829099 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.841569 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs"]
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.881762 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.881827 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5gmdh"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.913328 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs"
Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.914978 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.915226 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.915344 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmjd\" (UniqueName: \"kubernetes.io/projected/e7119be8-fb6d-4bb9-8604-10d36abd643f-kube-api-access-rqmjd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:11 crc kubenswrapper[4817]: I0314 05:58:11.931420 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5gmdh" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.017371 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.017483 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.017530 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmjd\" (UniqueName: \"kubernetes.io/projected/e7119be8-fb6d-4bb9-8604-10d36abd643f-kube-api-access-rqmjd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.017582 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.024726 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.024884 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-ssh-key-openstack-edpm-ipam\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.029083 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.042091 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmjd\" (UniqueName: \"kubernetes.io/projected/e7119be8-fb6d-4bb9-8604-10d36abd643f-kube-api-access-rqmjd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.144041 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.359858 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5gmdh" Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.435211 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5gmdh"] Mar 14 05:58:12 crc kubenswrapper[4817]: I0314 05:58:12.765771 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs"] Mar 14 05:58:13 crc kubenswrapper[4817]: I0314 05:58:13.305243 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" event={"ID":"e7119be8-fb6d-4bb9-8604-10d36abd643f","Type":"ContainerStarted","Data":"3d0614b4ce31a4c52fb383b19afc03884cbf1699baeb5f8b327709bd31a5991c"} Mar 14 05:58:14 crc kubenswrapper[4817]: I0314 05:58:14.315845 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5gmdh" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerName="registry-server" containerID="cri-o://00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0" gracePeriod=2 Mar 14 05:58:14 crc kubenswrapper[4817]: I0314 05:58:14.957571 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5gmdh" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.105438 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcjjv\" (UniqueName: \"kubernetes.io/projected/4f7c277b-c601-41d3-9a76-5b5b13803d4f-kube-api-access-xcjjv\") pod \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.105660 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-catalog-content\") pod \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.105712 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-utilities\") pod \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\" (UID: \"4f7c277b-c601-41d3-9a76-5b5b13803d4f\") " Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.106863 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-utilities" (OuterVolumeSpecName: "utilities") pod "4f7c277b-c601-41d3-9a76-5b5b13803d4f" (UID: "4f7c277b-c601-41d3-9a76-5b5b13803d4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.122992 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f7c277b-c601-41d3-9a76-5b5b13803d4f-kube-api-access-xcjjv" (OuterVolumeSpecName: "kube-api-access-xcjjv") pod "4f7c277b-c601-41d3-9a76-5b5b13803d4f" (UID: "4f7c277b-c601-41d3-9a76-5b5b13803d4f"). InnerVolumeSpecName "kube-api-access-xcjjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.169993 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f7c277b-c601-41d3-9a76-5b5b13803d4f" (UID: "4f7c277b-c601-41d3-9a76-5b5b13803d4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.208497 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcjjv\" (UniqueName: \"kubernetes.io/projected/4f7c277b-c601-41d3-9a76-5b5b13803d4f-kube-api-access-xcjjv\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.208546 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.208561 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f7c277b-c601-41d3-9a76-5b5b13803d4f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.336375 4817 generic.go:334] "Generic (PLEG): container finished" podID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerID="00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0" exitCode=0 Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.336427 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gmdh" event={"ID":"4f7c277b-c601-41d3-9a76-5b5b13803d4f","Type":"ContainerDied","Data":"00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0"} Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.336467 4817 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5gmdh" event={"ID":"4f7c277b-c601-41d3-9a76-5b5b13803d4f","Type":"ContainerDied","Data":"86f67c2ee53dcf4a1120a694fde0317f8d7e96f63f3bd871247d2ba6eba0a6fd"} Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.336490 4817 scope.go:117] "RemoveContainer" containerID="00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.336645 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gmdh" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.370847 4817 scope.go:117] "RemoveContainer" containerID="be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.385313 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5gmdh"] Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.400357 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5gmdh"] Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.433333 4817 scope.go:117] "RemoveContainer" containerID="542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.461234 4817 scope.go:117] "RemoveContainer" containerID="00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0" Mar 14 05:58:15 crc kubenswrapper[4817]: E0314 05:58:15.461735 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0\": container with ID starting with 00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0 not found: ID does not exist" containerID="00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 
05:58:15.461858 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0"} err="failed to get container status \"00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0\": rpc error: code = NotFound desc = could not find container \"00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0\": container with ID starting with 00515d4b807930076d7d9db4ba9d1c7078b7f82efa02aaf2e2f247a5375183e0 not found: ID does not exist" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.461920 4817 scope.go:117] "RemoveContainer" containerID="be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1" Mar 14 05:58:15 crc kubenswrapper[4817]: E0314 05:58:15.462617 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1\": container with ID starting with be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1 not found: ID does not exist" containerID="be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.462678 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1"} err="failed to get container status \"be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1\": rpc error: code = NotFound desc = could not find container \"be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1\": container with ID starting with be4a8d4e5ee0ca7f51d20c850fa7d7b6bfb30f9da5b69da1b95396c574e6caf1 not found: ID does not exist" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.462710 4817 scope.go:117] "RemoveContainer" containerID="542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03" Mar 14 05:58:15 crc 
kubenswrapper[4817]: E0314 05:58:15.463091 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03\": container with ID starting with 542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03 not found: ID does not exist" containerID="542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03" Mar 14 05:58:15 crc kubenswrapper[4817]: I0314 05:58:15.463119 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03"} err="failed to get container status \"542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03\": rpc error: code = NotFound desc = could not find container \"542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03\": container with ID starting with 542537ccadde7a132fb8cab3c7d81f0fd59b12e97817e3c042b395cec037ba03 not found: ID does not exist" Mar 14 05:58:16 crc kubenswrapper[4817]: I0314 05:58:16.348325 4817 generic.go:334] "Generic (PLEG): container finished" podID="da2757e1-ec61-4c1f-8060-3da273bd77cd" containerID="32ee5a8d4c3f4b5caa78f040eb5b4e35bc4b3d12751615f99eb867dadc73e7bd" exitCode=0 Mar 14 05:58:16 crc kubenswrapper[4817]: I0314 05:58:16.348464 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da2757e1-ec61-4c1f-8060-3da273bd77cd","Type":"ContainerDied","Data":"32ee5a8d4c3f4b5caa78f040eb5b4e35bc4b3d12751615f99eb867dadc73e7bd"} Mar 14 05:58:16 crc kubenswrapper[4817]: I0314 05:58:16.755292 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" path="/var/lib/kubelet/pods/4f7c277b-c601-41d3-9a76-5b5b13803d4f/volumes" Mar 14 05:58:18 crc kubenswrapper[4817]: I0314 05:58:18.402689 4817 generic.go:334] "Generic (PLEG): container finished" 
podID="c9ecd33b-91dc-44fc-932d-d962a7835af9" containerID="8f8fcec0386b425e1c7d21e15a517ac1799845e398d9df44d2a3492328eef519" exitCode=0 Mar 14 05:58:18 crc kubenswrapper[4817]: I0314 05:58:18.403124 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9ecd33b-91dc-44fc-932d-d962a7835af9","Type":"ContainerDied","Data":"8f8fcec0386b425e1c7d21e15a517ac1799845e398d9df44d2a3492328eef519"} Mar 14 05:58:23 crc kubenswrapper[4817]: I0314 05:58:23.468942 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da2757e1-ec61-4c1f-8060-3da273bd77cd","Type":"ContainerStarted","Data":"880744cf4656832a7cbb8a253956c05509325243b70d2ca7de6c5a07ef823570"} Mar 14 05:58:23 crc kubenswrapper[4817]: I0314 05:58:23.470094 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 14 05:58:23 crc kubenswrapper[4817]: I0314 05:58:23.470716 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" event={"ID":"e7119be8-fb6d-4bb9-8604-10d36abd643f","Type":"ContainerStarted","Data":"0cdbc84f97fc936be0e3367287c9fe3e0f333bb79c22911c6eca400b13cb236f"} Mar 14 05:58:23 crc kubenswrapper[4817]: I0314 05:58:23.472800 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9ecd33b-91dc-44fc-932d-d962a7835af9","Type":"ContainerStarted","Data":"4a570a58266c9563fcc4ee7212f5857475994b5f826fa268f5ac06dd59fea305"} Mar 14 05:58:23 crc kubenswrapper[4817]: I0314 05:58:23.473132 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:58:23 crc kubenswrapper[4817]: I0314 05:58:23.534701 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.534670418 podStartE2EDuration="42.534670418s" 
podCreationTimestamp="2026-03-14 05:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:58:23.524026234 +0000 UTC m=+1557.562287000" watchObservedRunningTime="2026-03-14 05:58:23.534670418 +0000 UTC m=+1557.572931164" Mar 14 05:58:23 crc kubenswrapper[4817]: I0314 05:58:23.580431 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" podStartSLOduration=2.499280811 podStartE2EDuration="12.580399644s" podCreationTimestamp="2026-03-14 05:58:11 +0000 UTC" firstStartedPulling="2026-03-14 05:58:12.779521332 +0000 UTC m=+1546.817782078" lastFinishedPulling="2026-03-14 05:58:22.860640165 +0000 UTC m=+1556.898900911" observedRunningTime="2026-03-14 05:58:23.572787797 +0000 UTC m=+1557.611048533" watchObservedRunningTime="2026-03-14 05:58:23.580399644 +0000 UTC m=+1557.618660390" Mar 14 05:58:23 crc kubenswrapper[4817]: I0314 05:58:23.658258 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.658239068 podStartE2EDuration="41.658239068s" podCreationTimestamp="2026-03-14 05:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 05:58:23.646638836 +0000 UTC m=+1557.684899582" watchObservedRunningTime="2026-03-14 05:58:23.658239068 +0000 UTC m=+1557.696499804" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.472026 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7pjlw"] Mar 14 05:58:30 crc kubenswrapper[4817]: E0314 05:58:30.473273 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerName="registry-server" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.473287 4817 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerName="registry-server" Mar 14 05:58:30 crc kubenswrapper[4817]: E0314 05:58:30.473308 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerName="extract-utilities" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.473315 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerName="extract-utilities" Mar 14 05:58:30 crc kubenswrapper[4817]: E0314 05:58:30.473332 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerName="extract-content" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.473339 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerName="extract-content" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.473518 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7c277b-c601-41d3-9a76-5b5b13803d4f" containerName="registry-server" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.474883 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.491720 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pjlw"] Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.493245 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqcmt\" (UniqueName: \"kubernetes.io/projected/b1af4125-e414-4ab9-9c68-bc5772290073-kube-api-access-mqcmt\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.493328 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-utilities\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.493502 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-catalog-content\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.595430 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqcmt\" (UniqueName: \"kubernetes.io/projected/b1af4125-e414-4ab9-9c68-bc5772290073-kube-api-access-mqcmt\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.595865 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-utilities\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.596034 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-catalog-content\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.596384 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-utilities\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.596693 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-catalog-content\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.623178 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqcmt\" (UniqueName: \"kubernetes.io/projected/b1af4125-e414-4ab9-9c68-bc5772290073-kube-api-access-mqcmt\") pod \"certified-operators-7pjlw\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:30 crc kubenswrapper[4817]: I0314 05:58:30.835256 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:31 crc kubenswrapper[4817]: I0314 05:58:31.972848 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pjlw"] Mar 14 05:58:31 crc kubenswrapper[4817]: W0314 05:58:31.979977 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1af4125_e414_4ab9_9c68_bc5772290073.slice/crio-398c195e33167fa4b4adcc985d3d97b64c6989cfd2d5fd9652c08a7204fd29ba WatchSource:0}: Error finding container 398c195e33167fa4b4adcc985d3d97b64c6989cfd2d5fd9652c08a7204fd29ba: Status 404 returned error can't find the container with id 398c195e33167fa4b4adcc985d3d97b64c6989cfd2d5fd9652c08a7204fd29ba Mar 14 05:58:32 crc kubenswrapper[4817]: I0314 05:58:32.573092 4817 generic.go:334] "Generic (PLEG): container finished" podID="b1af4125-e414-4ab9-9c68-bc5772290073" containerID="8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b" exitCode=0 Mar 14 05:58:32 crc kubenswrapper[4817]: I0314 05:58:32.573197 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjlw" event={"ID":"b1af4125-e414-4ab9-9c68-bc5772290073","Type":"ContainerDied","Data":"8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b"} Mar 14 05:58:32 crc kubenswrapper[4817]: I0314 05:58:32.573592 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjlw" event={"ID":"b1af4125-e414-4ab9-9c68-bc5772290073","Type":"ContainerStarted","Data":"398c195e33167fa4b4adcc985d3d97b64c6989cfd2d5fd9652c08a7204fd29ba"} Mar 14 05:58:33 crc kubenswrapper[4817]: I0314 05:58:33.298152 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 14 05:58:33 crc kubenswrapper[4817]: I0314 05:58:33.589395 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7pjlw" event={"ID":"b1af4125-e414-4ab9-9c68-bc5772290073","Type":"ContainerStarted","Data":"93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144"} Mar 14 05:58:34 crc kubenswrapper[4817]: I0314 05:58:34.602546 4817 generic.go:334] "Generic (PLEG): container finished" podID="e7119be8-fb6d-4bb9-8604-10d36abd643f" containerID="0cdbc84f97fc936be0e3367287c9fe3e0f333bb79c22911c6eca400b13cb236f" exitCode=0 Mar 14 05:58:34 crc kubenswrapper[4817]: I0314 05:58:34.602659 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" event={"ID":"e7119be8-fb6d-4bb9-8604-10d36abd643f","Type":"ContainerDied","Data":"0cdbc84f97fc936be0e3367287c9fe3e0f333bb79c22911c6eca400b13cb236f"} Mar 14 05:58:34 crc kubenswrapper[4817]: I0314 05:58:34.606740 4817 generic.go:334] "Generic (PLEG): container finished" podID="b1af4125-e414-4ab9-9c68-bc5772290073" containerID="93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144" exitCode=0 Mar 14 05:58:34 crc kubenswrapper[4817]: I0314 05:58:34.606793 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjlw" event={"ID":"b1af4125-e414-4ab9-9c68-bc5772290073","Type":"ContainerDied","Data":"93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144"} Mar 14 05:58:35 crc kubenswrapper[4817]: I0314 05:58:35.624607 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjlw" event={"ID":"b1af4125-e414-4ab9-9c68-bc5772290073","Type":"ContainerStarted","Data":"672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7"} Mar 14 05:58:35 crc kubenswrapper[4817]: I0314 05:58:35.659481 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7pjlw" podStartSLOduration=3.255826608 podStartE2EDuration="5.659406793s" podCreationTimestamp="2026-03-14 05:58:30 
+0000 UTC" firstStartedPulling="2026-03-14 05:58:32.575223728 +0000 UTC m=+1566.613484474" lastFinishedPulling="2026-03-14 05:58:34.978803913 +0000 UTC m=+1569.017064659" observedRunningTime="2026-03-14 05:58:35.647007469 +0000 UTC m=+1569.685268265" watchObservedRunningTime="2026-03-14 05:58:35.659406793 +0000 UTC m=+1569.697667549" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.121648 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.272130 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-ssh-key-openstack-edpm-ipam\") pod \"e7119be8-fb6d-4bb9-8604-10d36abd643f\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.272290 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-inventory\") pod \"e7119be8-fb6d-4bb9-8604-10d36abd643f\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.272392 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-repo-setup-combined-ca-bundle\") pod \"e7119be8-fb6d-4bb9-8604-10d36abd643f\" (UID: \"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.273740 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqmjd\" (UniqueName: \"kubernetes.io/projected/e7119be8-fb6d-4bb9-8604-10d36abd643f-kube-api-access-rqmjd\") pod \"e7119be8-fb6d-4bb9-8604-10d36abd643f\" (UID: 
\"e7119be8-fb6d-4bb9-8604-10d36abd643f\") " Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.280790 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7119be8-fb6d-4bb9-8604-10d36abd643f-kube-api-access-rqmjd" (OuterVolumeSpecName: "kube-api-access-rqmjd") pod "e7119be8-fb6d-4bb9-8604-10d36abd643f" (UID: "e7119be8-fb6d-4bb9-8604-10d36abd643f"). InnerVolumeSpecName "kube-api-access-rqmjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.281701 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e7119be8-fb6d-4bb9-8604-10d36abd643f" (UID: "e7119be8-fb6d-4bb9-8604-10d36abd643f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.311216 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-inventory" (OuterVolumeSpecName: "inventory") pod "e7119be8-fb6d-4bb9-8604-10d36abd643f" (UID: "e7119be8-fb6d-4bb9-8604-10d36abd643f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.311291 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e7119be8-fb6d-4bb9-8604-10d36abd643f" (UID: "e7119be8-fb6d-4bb9-8604-10d36abd643f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.377173 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.377751 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.377859 4817 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7119be8-fb6d-4bb9-8604-10d36abd643f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.377998 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqmjd\" (UniqueName: \"kubernetes.io/projected/e7119be8-fb6d-4bb9-8604-10d36abd643f-kube-api-access-rqmjd\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.636115 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.636181 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs" event={"ID":"e7119be8-fb6d-4bb9-8604-10d36abd643f","Type":"ContainerDied","Data":"3d0614b4ce31a4c52fb383b19afc03884cbf1699baeb5f8b327709bd31a5991c"} Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.636601 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d0614b4ce31a4c52fb383b19afc03884cbf1699baeb5f8b327709bd31a5991c" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.743877 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc"] Mar 14 05:58:36 crc kubenswrapper[4817]: E0314 05:58:36.744203 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7119be8-fb6d-4bb9-8604-10d36abd643f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.744218 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7119be8-fb6d-4bb9-8604-10d36abd643f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.744433 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7119be8-fb6d-4bb9-8604-10d36abd643f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.745162 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.747445 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.750181 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.750477 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.750740 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.765010 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc"] Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.887272 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9sls\" (UniqueName: \"kubernetes.io/projected/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-kube-api-access-s9sls\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.887345 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 
05:58:36.887386 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.887488 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.990461 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9sls\" (UniqueName: \"kubernetes.io/projected/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-kube-api-access-s9sls\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.990532 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.990565 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:36 crc kubenswrapper[4817]: I0314 05:58:36.990596 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:37 crc kubenswrapper[4817]: I0314 05:58:37.000918 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:37 crc kubenswrapper[4817]: I0314 05:58:37.007501 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:37 crc kubenswrapper[4817]: I0314 05:58:37.011640 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:37 crc kubenswrapper[4817]: I0314 05:58:37.025637 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9sls\" (UniqueName: \"kubernetes.io/projected/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-kube-api-access-s9sls\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:37 crc kubenswrapper[4817]: I0314 05:58:37.081643 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 05:58:37 crc kubenswrapper[4817]: I0314 05:58:37.768378 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc"] Mar 14 05:58:38 crc kubenswrapper[4817]: I0314 05:58:38.658310 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" event={"ID":"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0","Type":"ContainerStarted","Data":"674e054e82a8e88a0b6aa731b4eae7f804c9dc4e2ad6c1f2e82fc15d2a7a00d8"} Mar 14 05:58:38 crc kubenswrapper[4817]: I0314 05:58:38.659259 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" event={"ID":"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0","Type":"ContainerStarted","Data":"60420f75fb1c531ba7c7f8c54725d226a68bae4c52c644ec9f4119ce0395bab7"} Mar 14 05:58:38 crc kubenswrapper[4817]: I0314 05:58:38.683248 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" podStartSLOduration=2.305271218 podStartE2EDuration="2.683228654s" podCreationTimestamp="2026-03-14 05:58:36 +0000 UTC" firstStartedPulling="2026-03-14 05:58:37.781102297 +0000 UTC m=+1571.819363043" 
lastFinishedPulling="2026-03-14 05:58:38.159059723 +0000 UTC m=+1572.197320479" observedRunningTime="2026-03-14 05:58:38.682184215 +0000 UTC m=+1572.720444961" watchObservedRunningTime="2026-03-14 05:58:38.683228654 +0000 UTC m=+1572.721489400" Mar 14 05:58:40 crc kubenswrapper[4817]: I0314 05:58:40.836340 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:40 crc kubenswrapper[4817]: I0314 05:58:40.836994 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:40 crc kubenswrapper[4817]: I0314 05:58:40.910827 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:41 crc kubenswrapper[4817]: I0314 05:58:41.318263 4817 scope.go:117] "RemoveContainer" containerID="d5576762a85178e95178c4ea78b9c9a133b609d5b87ee196d90ce59dac6e1bf5" Mar 14 05:58:41 crc kubenswrapper[4817]: I0314 05:58:41.581075 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 14 05:58:41 crc kubenswrapper[4817]: I0314 05:58:41.769035 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:41 crc kubenswrapper[4817]: I0314 05:58:41.847137 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7pjlw"] Mar 14 05:58:43 crc kubenswrapper[4817]: I0314 05:58:43.719071 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7pjlw" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" containerName="registry-server" containerID="cri-o://672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7" gracePeriod=2 Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.248780 4817 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.406735 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-catalog-content\") pod \"b1af4125-e414-4ab9-9c68-bc5772290073\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.407168 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-utilities\") pod \"b1af4125-e414-4ab9-9c68-bc5772290073\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.407254 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqcmt\" (UniqueName: \"kubernetes.io/projected/b1af4125-e414-4ab9-9c68-bc5772290073-kube-api-access-mqcmt\") pod \"b1af4125-e414-4ab9-9c68-bc5772290073\" (UID: \"b1af4125-e414-4ab9-9c68-bc5772290073\") " Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.409489 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-utilities" (OuterVolumeSpecName: "utilities") pod "b1af4125-e414-4ab9-9c68-bc5772290073" (UID: "b1af4125-e414-4ab9-9c68-bc5772290073"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.419838 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1af4125-e414-4ab9-9c68-bc5772290073-kube-api-access-mqcmt" (OuterVolumeSpecName: "kube-api-access-mqcmt") pod "b1af4125-e414-4ab9-9c68-bc5772290073" (UID: "b1af4125-e414-4ab9-9c68-bc5772290073"). 
InnerVolumeSpecName "kube-api-access-mqcmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.510851 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.511484 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqcmt\" (UniqueName: \"kubernetes.io/projected/b1af4125-e414-4ab9-9c68-bc5772290073-kube-api-access-mqcmt\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.738766 4817 generic.go:334] "Generic (PLEG): container finished" podID="b1af4125-e414-4ab9-9c68-bc5772290073" containerID="672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7" exitCode=0 Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.738953 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pjlw" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.749460 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjlw" event={"ID":"b1af4125-e414-4ab9-9c68-bc5772290073","Type":"ContainerDied","Data":"672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7"} Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.749530 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjlw" event={"ID":"b1af4125-e414-4ab9-9c68-bc5772290073","Type":"ContainerDied","Data":"398c195e33167fa4b4adcc985d3d97b64c6989cfd2d5fd9652c08a7204fd29ba"} Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.749555 4817 scope.go:117] "RemoveContainer" containerID="672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.772708 4817 scope.go:117] "RemoveContainer" containerID="93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.803066 4817 scope.go:117] "RemoveContainer" containerID="8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.853835 4817 scope.go:117] "RemoveContainer" containerID="672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7" Mar 14 05:58:44 crc kubenswrapper[4817]: E0314 05:58:44.854469 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7\": container with ID starting with 672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7 not found: ID does not exist" containerID="672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.854521 4817 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7"} err="failed to get container status \"672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7\": rpc error: code = NotFound desc = could not find container \"672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7\": container with ID starting with 672c13aa9f8000d436c94fd86e09208395daee845aa0e46dc24a5bb3f792cbc7 not found: ID does not exist" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.854551 4817 scope.go:117] "RemoveContainer" containerID="93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144" Mar 14 05:58:44 crc kubenswrapper[4817]: E0314 05:58:44.855188 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144\": container with ID starting with 93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144 not found: ID does not exist" containerID="93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.855241 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144"} err="failed to get container status \"93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144\": rpc error: code = NotFound desc = could not find container \"93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144\": container with ID starting with 93d37f2346bdb20a5883144a44a271481544d7f00682cf2104ac246c466eb144 not found: ID does not exist" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.855266 4817 scope.go:117] "RemoveContainer" containerID="8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b" Mar 14 05:58:44 crc kubenswrapper[4817]: E0314 05:58:44.856353 4817 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b\": container with ID starting with 8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b not found: ID does not exist" containerID="8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b" Mar 14 05:58:44 crc kubenswrapper[4817]: I0314 05:58:44.856427 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b"} err="failed to get container status \"8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b\": rpc error: code = NotFound desc = could not find container \"8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b\": container with ID starting with 8ee42725134a7c2767688b929ccdc4a5045d08b49e65ba05fc9ddb466da3ed1b not found: ID does not exist" Mar 14 05:58:45 crc kubenswrapper[4817]: I0314 05:58:45.291086 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1af4125-e414-4ab9-9c68-bc5772290073" (UID: "b1af4125-e414-4ab9-9c68-bc5772290073"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:58:45 crc kubenswrapper[4817]: I0314 05:58:45.331144 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1af4125-e414-4ab9-9c68-bc5772290073-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:58:45 crc kubenswrapper[4817]: I0314 05:58:45.384025 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7pjlw"] Mar 14 05:58:45 crc kubenswrapper[4817]: I0314 05:58:45.395429 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7pjlw"] Mar 14 05:58:46 crc kubenswrapper[4817]: I0314 05:58:46.762141 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" path="/var/lib/kubelet/pods/b1af4125-e414-4ab9-9c68-bc5772290073/volumes" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.285945 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-swvf5"] Mar 14 05:58:53 crc kubenswrapper[4817]: E0314 05:58:53.287095 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" containerName="registry-server" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.287112 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" containerName="registry-server" Mar 14 05:58:53 crc kubenswrapper[4817]: E0314 05:58:53.287141 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" containerName="extract-utilities" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.287150 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" containerName="extract-utilities" Mar 14 05:58:53 crc kubenswrapper[4817]: E0314 05:58:53.287158 4817 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" containerName="extract-content" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.287166 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" containerName="extract-content" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.287445 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1af4125-e414-4ab9-9c68-bc5772290073" containerName="registry-server" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.289281 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.303541 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvf5"] Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.311459 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-catalog-content\") pod \"redhat-marketplace-swvf5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.311535 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-utilities\") pod \"redhat-marketplace-swvf5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.415018 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-catalog-content\") pod \"redhat-marketplace-swvf5\" (UID: 
\"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.415081 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-utilities\") pod \"redhat-marketplace-swvf5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.415125 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqb6\" (UniqueName: \"kubernetes.io/projected/8e6ee9fd-186f-441f-a05e-48a82ac371c5-kube-api-access-rfqb6\") pod \"redhat-marketplace-swvf5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.415783 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-catalog-content\") pod \"redhat-marketplace-swvf5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.416598 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-utilities\") pod \"redhat-marketplace-swvf5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.517021 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqb6\" (UniqueName: \"kubernetes.io/projected/8e6ee9fd-186f-441f-a05e-48a82ac371c5-kube-api-access-rfqb6\") pod \"redhat-marketplace-swvf5\" (UID: 
\"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.551487 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqb6\" (UniqueName: \"kubernetes.io/projected/8e6ee9fd-186f-441f-a05e-48a82ac371c5-kube-api-access-rfqb6\") pod \"redhat-marketplace-swvf5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:53 crc kubenswrapper[4817]: I0314 05:58:53.630414 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:58:54 crc kubenswrapper[4817]: I0314 05:58:54.189090 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvf5"] Mar 14 05:58:54 crc kubenswrapper[4817]: I0314 05:58:54.854823 4817 generic.go:334] "Generic (PLEG): container finished" podID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerID="aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053" exitCode=0 Mar 14 05:58:54 crc kubenswrapper[4817]: I0314 05:58:54.854895 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvf5" event={"ID":"8e6ee9fd-186f-441f-a05e-48a82ac371c5","Type":"ContainerDied","Data":"aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053"} Mar 14 05:58:54 crc kubenswrapper[4817]: I0314 05:58:54.854975 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvf5" event={"ID":"8e6ee9fd-186f-441f-a05e-48a82ac371c5","Type":"ContainerStarted","Data":"134893183e13e53822bcb9862c41d118c2b19957a61b918cbdc7685440f9b91c"} Mar 14 05:58:56 crc kubenswrapper[4817]: I0314 05:58:56.883895 4817 generic.go:334] "Generic (PLEG): container finished" podID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" 
containerID="57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b" exitCode=0 Mar 14 05:58:56 crc kubenswrapper[4817]: I0314 05:58:56.884031 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvf5" event={"ID":"8e6ee9fd-186f-441f-a05e-48a82ac371c5","Type":"ContainerDied","Data":"57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b"} Mar 14 05:58:57 crc kubenswrapper[4817]: I0314 05:58:57.900424 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvf5" event={"ID":"8e6ee9fd-186f-441f-a05e-48a82ac371c5","Type":"ContainerStarted","Data":"7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5"} Mar 14 05:58:57 crc kubenswrapper[4817]: I0314 05:58:57.929864 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-swvf5" podStartSLOduration=2.467560885 podStartE2EDuration="4.929843327s" podCreationTimestamp="2026-03-14 05:58:53 +0000 UTC" firstStartedPulling="2026-03-14 05:58:54.857473889 +0000 UTC m=+1588.895734655" lastFinishedPulling="2026-03-14 05:58:57.319756351 +0000 UTC m=+1591.358017097" observedRunningTime="2026-03-14 05:58:57.925536404 +0000 UTC m=+1591.963797150" watchObservedRunningTime="2026-03-14 05:58:57.929843327 +0000 UTC m=+1591.968104073" Mar 14 05:59:03 crc kubenswrapper[4817]: I0314 05:59:03.631873 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:59:03 crc kubenswrapper[4817]: I0314 05:59:03.633039 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:59:03 crc kubenswrapper[4817]: I0314 05:59:03.706421 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:59:04 crc kubenswrapper[4817]: I0314 05:59:04.050580 
4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:59:04 crc kubenswrapper[4817]: I0314 05:59:04.128871 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvf5"] Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.021329 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-swvf5" podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerName="registry-server" containerID="cri-o://7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5" gracePeriod=2 Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.666228 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.814011 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-utilities\") pod \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.814358 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqb6\" (UniqueName: \"kubernetes.io/projected/8e6ee9fd-186f-441f-a05e-48a82ac371c5-kube-api-access-rfqb6\") pod \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.814403 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-catalog-content\") pod \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\" (UID: \"8e6ee9fd-186f-441f-a05e-48a82ac371c5\") " Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.814978 
4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-utilities" (OuterVolumeSpecName: "utilities") pod "8e6ee9fd-186f-441f-a05e-48a82ac371c5" (UID: "8e6ee9fd-186f-441f-a05e-48a82ac371c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.815535 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.821252 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6ee9fd-186f-441f-a05e-48a82ac371c5-kube-api-access-rfqb6" (OuterVolumeSpecName: "kube-api-access-rfqb6") pod "8e6ee9fd-186f-441f-a05e-48a82ac371c5" (UID: "8e6ee9fd-186f-441f-a05e-48a82ac371c5"). InnerVolumeSpecName "kube-api-access-rfqb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.842694 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e6ee9fd-186f-441f-a05e-48a82ac371c5" (UID: "8e6ee9fd-186f-441f-a05e-48a82ac371c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.916673 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqb6\" (UniqueName: \"kubernetes.io/projected/8e6ee9fd-186f-441f-a05e-48a82ac371c5-kube-api-access-rfqb6\") on node \"crc\" DevicePath \"\"" Mar 14 05:59:06 crc kubenswrapper[4817]: I0314 05:59:06.916713 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6ee9fd-186f-441f-a05e-48a82ac371c5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.036033 4817 generic.go:334] "Generic (PLEG): container finished" podID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerID="7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5" exitCode=0 Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.036087 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvf5" event={"ID":"8e6ee9fd-186f-441f-a05e-48a82ac371c5","Type":"ContainerDied","Data":"7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5"} Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.036123 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swvf5" event={"ID":"8e6ee9fd-186f-441f-a05e-48a82ac371c5","Type":"ContainerDied","Data":"134893183e13e53822bcb9862c41d118c2b19957a61b918cbdc7685440f9b91c"} Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.036148 4817 scope.go:117] "RemoveContainer" containerID="7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.038497 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swvf5" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.070343 4817 scope.go:117] "RemoveContainer" containerID="57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.096779 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvf5"] Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.112095 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-swvf5"] Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.115261 4817 scope.go:117] "RemoveContainer" containerID="aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.152485 4817 scope.go:117] "RemoveContainer" containerID="7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5" Mar 14 05:59:07 crc kubenswrapper[4817]: E0314 05:59:07.152950 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5\": container with ID starting with 7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5 not found: ID does not exist" containerID="7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.152989 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5"} err="failed to get container status \"7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5\": rpc error: code = NotFound desc = could not find container \"7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5\": container with ID starting with 7a71c3bef75f10e1246a2e31f2a1bb3e387327f2caa4f9a78cbdd20b50ced3b5 not found: 
ID does not exist" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.153011 4817 scope.go:117] "RemoveContainer" containerID="57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b" Mar 14 05:59:07 crc kubenswrapper[4817]: E0314 05:59:07.153392 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b\": container with ID starting with 57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b not found: ID does not exist" containerID="57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.153500 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b"} err="failed to get container status \"57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b\": rpc error: code = NotFound desc = could not find container \"57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b\": container with ID starting with 57012cecacad7e95ef365f1fc58c131f14b0bc303c91a17978f424066cae481b not found: ID does not exist" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.153585 4817 scope.go:117] "RemoveContainer" containerID="aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053" Mar 14 05:59:07 crc kubenswrapper[4817]: E0314 05:59:07.153993 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053\": container with ID starting with aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053 not found: ID does not exist" containerID="aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053" Mar 14 05:59:07 crc kubenswrapper[4817]: I0314 05:59:07.154029 4817 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053"} err="failed to get container status \"aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053\": rpc error: code = NotFound desc = could not find container \"aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053\": container with ID starting with aed10025bce381fbccc4c07c2b710eefd3684f29567ce42fd148abfca4a53053 not found: ID does not exist" Mar 14 05:59:08 crc kubenswrapper[4817]: I0314 05:59:08.749959 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" path="/var/lib/kubelet/pods/8e6ee9fd-186f-441f-a05e-48a82ac371c5/volumes" Mar 14 05:59:38 crc kubenswrapper[4817]: I0314 05:59:38.565427 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 05:59:38 crc kubenswrapper[4817]: I0314 05:59:38.566553 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 05:59:41 crc kubenswrapper[4817]: I0314 05:59:41.468750 4817 scope.go:117] "RemoveContainer" containerID="d8d99ca6b8613994427f3e2d1769d7a5c531f5e64ad2a06a7e2177c95fcbc1a1" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.170575 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4"] Mar 14 06:00:00 crc kubenswrapper[4817]: E0314 06:00:00.174292 4817 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerName="extract-utilities" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.174322 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerName="extract-utilities" Mar 14 06:00:00 crc kubenswrapper[4817]: E0314 06:00:00.174379 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerName="extract-content" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.174389 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerName="extract-content" Mar 14 06:00:00 crc kubenswrapper[4817]: E0314 06:00:00.174425 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerName="registry-server" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.174434 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerName="registry-server" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.174700 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6ee9fd-186f-441f-a05e-48a82ac371c5" containerName="registry-server" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.179271 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.182075 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.184517 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557800-ztdlq"] Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.186148 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557800-ztdlq" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.190294 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.190589 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.190633 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.191393 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.202025 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4"] Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.229029 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557800-ztdlq"] Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.267824 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65221d2c-5834-4b80-a1b6-a0240e6d77de-secret-volume\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.268121 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qldhz\" (UniqueName: \"kubernetes.io/projected/65221d2c-5834-4b80-a1b6-a0240e6d77de-kube-api-access-qldhz\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.268193 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65221d2c-5834-4b80-a1b6-a0240e6d77de-config-volume\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.370133 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65221d2c-5834-4b80-a1b6-a0240e6d77de-secret-volume\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.370564 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4cn5\" (UniqueName: \"kubernetes.io/projected/ec34242c-dc4b-4feb-ae3b-16d3b01c48ad-kube-api-access-s4cn5\") pod \"auto-csr-approver-29557800-ztdlq\" (UID: \"ec34242c-dc4b-4feb-ae3b-16d3b01c48ad\") " pod="openshift-infra/auto-csr-approver-29557800-ztdlq" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.370647 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qldhz\" (UniqueName: \"kubernetes.io/projected/65221d2c-5834-4b80-a1b6-a0240e6d77de-kube-api-access-qldhz\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.371436 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/65221d2c-5834-4b80-a1b6-a0240e6d77de-config-volume\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.373156 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65221d2c-5834-4b80-a1b6-a0240e6d77de-config-volume\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.380218 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65221d2c-5834-4b80-a1b6-a0240e6d77de-secret-volume\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.402621 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qldhz\" (UniqueName: \"kubernetes.io/projected/65221d2c-5834-4b80-a1b6-a0240e6d77de-kube-api-access-qldhz\") pod \"collect-profiles-29557800-fmxs4\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.475827 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4cn5\" (UniqueName: \"kubernetes.io/projected/ec34242c-dc4b-4feb-ae3b-16d3b01c48ad-kube-api-access-s4cn5\") pod \"auto-csr-approver-29557800-ztdlq\" (UID: \"ec34242c-dc4b-4feb-ae3b-16d3b01c48ad\") " pod="openshift-infra/auto-csr-approver-29557800-ztdlq" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.500558 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4cn5\" (UniqueName: \"kubernetes.io/projected/ec34242c-dc4b-4feb-ae3b-16d3b01c48ad-kube-api-access-s4cn5\") pod \"auto-csr-approver-29557800-ztdlq\" (UID: \"ec34242c-dc4b-4feb-ae3b-16d3b01c48ad\") " pod="openshift-infra/auto-csr-approver-29557800-ztdlq" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.514574 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:00 crc kubenswrapper[4817]: I0314 06:00:00.540299 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557800-ztdlq" Mar 14 06:00:01 crc kubenswrapper[4817]: I0314 06:00:01.046278 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557800-ztdlq"] Mar 14 06:00:01 crc kubenswrapper[4817]: I0314 06:00:01.057445 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4"] Mar 14 06:00:01 crc kubenswrapper[4817]: W0314 06:00:01.058722 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65221d2c_5834_4b80_a1b6_a0240e6d77de.slice/crio-68b9e36ba99a365666698466764d589b091951d5db2c4ef0fb303b9cc68eaf8c WatchSource:0}: Error finding container 68b9e36ba99a365666698466764d589b091951d5db2c4ef0fb303b9cc68eaf8c: Status 404 returned error can't find the container with id 68b9e36ba99a365666698466764d589b091951d5db2c4ef0fb303b9cc68eaf8c Mar 14 06:00:01 crc kubenswrapper[4817]: I0314 06:00:01.694798 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557800-ztdlq" event={"ID":"ec34242c-dc4b-4feb-ae3b-16d3b01c48ad","Type":"ContainerStarted","Data":"4d36f7e25743b12fe0075c68eecbb2c3214703deb2d3b6487c4c69caf5e42377"} Mar 14 06:00:01 
crc kubenswrapper[4817]: I0314 06:00:01.698022 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" event={"ID":"65221d2c-5834-4b80-a1b6-a0240e6d77de","Type":"ContainerStarted","Data":"f4ba8d4057882ebb041febf063553bb8dbaba7b15d058cd6598536c175f1b67d"} Mar 14 06:00:01 crc kubenswrapper[4817]: I0314 06:00:01.698098 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" event={"ID":"65221d2c-5834-4b80-a1b6-a0240e6d77de","Type":"ContainerStarted","Data":"68b9e36ba99a365666698466764d589b091951d5db2c4ef0fb303b9cc68eaf8c"} Mar 14 06:00:02 crc kubenswrapper[4817]: I0314 06:00:02.714388 4817 generic.go:334] "Generic (PLEG): container finished" podID="65221d2c-5834-4b80-a1b6-a0240e6d77de" containerID="f4ba8d4057882ebb041febf063553bb8dbaba7b15d058cd6598536c175f1b67d" exitCode=0 Mar 14 06:00:02 crc kubenswrapper[4817]: I0314 06:00:02.714548 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" event={"ID":"65221d2c-5834-4b80-a1b6-a0240e6d77de","Type":"ContainerDied","Data":"f4ba8d4057882ebb041febf063553bb8dbaba7b15d058cd6598536c175f1b67d"} Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.111198 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.262579 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65221d2c-5834-4b80-a1b6-a0240e6d77de-config-volume\") pod \"65221d2c-5834-4b80-a1b6-a0240e6d77de\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.262701 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qldhz\" (UniqueName: \"kubernetes.io/projected/65221d2c-5834-4b80-a1b6-a0240e6d77de-kube-api-access-qldhz\") pod \"65221d2c-5834-4b80-a1b6-a0240e6d77de\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.262982 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65221d2c-5834-4b80-a1b6-a0240e6d77de-secret-volume\") pod \"65221d2c-5834-4b80-a1b6-a0240e6d77de\" (UID: \"65221d2c-5834-4b80-a1b6-a0240e6d77de\") " Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.264129 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65221d2c-5834-4b80-a1b6-a0240e6d77de-config-volume" (OuterVolumeSpecName: "config-volume") pod "65221d2c-5834-4b80-a1b6-a0240e6d77de" (UID: "65221d2c-5834-4b80-a1b6-a0240e6d77de"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.274305 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65221d2c-5834-4b80-a1b6-a0240e6d77de-kube-api-access-qldhz" (OuterVolumeSpecName: "kube-api-access-qldhz") pod "65221d2c-5834-4b80-a1b6-a0240e6d77de" (UID: "65221d2c-5834-4b80-a1b6-a0240e6d77de"). 
InnerVolumeSpecName "kube-api-access-qldhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.274425 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65221d2c-5834-4b80-a1b6-a0240e6d77de-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65221d2c-5834-4b80-a1b6-a0240e6d77de" (UID: "65221d2c-5834-4b80-a1b6-a0240e6d77de"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.365795 4817 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65221d2c-5834-4b80-a1b6-a0240e6d77de-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.365853 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qldhz\" (UniqueName: \"kubernetes.io/projected/65221d2c-5834-4b80-a1b6-a0240e6d77de-kube-api-access-qldhz\") on node \"crc\" DevicePath \"\"" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.365875 4817 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65221d2c-5834-4b80-a1b6-a0240e6d77de-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.748767 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.750754 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4" event={"ID":"65221d2c-5834-4b80-a1b6-a0240e6d77de","Type":"ContainerDied","Data":"68b9e36ba99a365666698466764d589b091951d5db2c4ef0fb303b9cc68eaf8c"} Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:04.750803 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68b9e36ba99a365666698466764d589b091951d5db2c4ef0fb303b9cc68eaf8c" Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:05.761390 4817 generic.go:334] "Generic (PLEG): container finished" podID="ec34242c-dc4b-4feb-ae3b-16d3b01c48ad" containerID="9546a08e19439822eab3177f338af1d2658d21f4ce07a9beac5f8e6e796ca770" exitCode=0 Mar 14 06:00:05 crc kubenswrapper[4817]: I0314 06:00:05.761451 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557800-ztdlq" event={"ID":"ec34242c-dc4b-4feb-ae3b-16d3b01c48ad","Type":"ContainerDied","Data":"9546a08e19439822eab3177f338af1d2658d21f4ce07a9beac5f8e6e796ca770"} Mar 14 06:00:07 crc kubenswrapper[4817]: I0314 06:00:07.134937 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557800-ztdlq" Mar 14 06:00:07 crc kubenswrapper[4817]: I0314 06:00:07.229394 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4cn5\" (UniqueName: \"kubernetes.io/projected/ec34242c-dc4b-4feb-ae3b-16d3b01c48ad-kube-api-access-s4cn5\") pod \"ec34242c-dc4b-4feb-ae3b-16d3b01c48ad\" (UID: \"ec34242c-dc4b-4feb-ae3b-16d3b01c48ad\") " Mar 14 06:00:07 crc kubenswrapper[4817]: I0314 06:00:07.239775 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec34242c-dc4b-4feb-ae3b-16d3b01c48ad-kube-api-access-s4cn5" (OuterVolumeSpecName: "kube-api-access-s4cn5") pod "ec34242c-dc4b-4feb-ae3b-16d3b01c48ad" (UID: "ec34242c-dc4b-4feb-ae3b-16d3b01c48ad"). InnerVolumeSpecName "kube-api-access-s4cn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:00:07 crc kubenswrapper[4817]: I0314 06:00:07.332536 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4cn5\" (UniqueName: \"kubernetes.io/projected/ec34242c-dc4b-4feb-ae3b-16d3b01c48ad-kube-api-access-s4cn5\") on node \"crc\" DevicePath \"\"" Mar 14 06:00:07 crc kubenswrapper[4817]: I0314 06:00:07.792642 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557800-ztdlq" event={"ID":"ec34242c-dc4b-4feb-ae3b-16d3b01c48ad","Type":"ContainerDied","Data":"4d36f7e25743b12fe0075c68eecbb2c3214703deb2d3b6487c4c69caf5e42377"} Mar 14 06:00:07 crc kubenswrapper[4817]: I0314 06:00:07.793097 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d36f7e25743b12fe0075c68eecbb2c3214703deb2d3b6487c4c69caf5e42377" Mar 14 06:00:07 crc kubenswrapper[4817]: I0314 06:00:07.792716 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557800-ztdlq" Mar 14 06:00:08 crc kubenswrapper[4817]: I0314 06:00:08.230656 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557794-ztjmq"] Mar 14 06:00:08 crc kubenswrapper[4817]: I0314 06:00:08.245496 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557794-ztjmq"] Mar 14 06:00:08 crc kubenswrapper[4817]: I0314 06:00:08.566040 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:00:08 crc kubenswrapper[4817]: I0314 06:00:08.566131 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:00:08 crc kubenswrapper[4817]: I0314 06:00:08.744957 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3277bee-fe8a-4fa1-a005-cc16f69665b7" path="/var/lib/kubelet/pods/c3277bee-fe8a-4fa1-a005-cc16f69665b7/volumes" Mar 14 06:00:38 crc kubenswrapper[4817]: I0314 06:00:38.565952 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:00:38 crc kubenswrapper[4817]: I0314 06:00:38.566817 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:00:38 crc kubenswrapper[4817]: I0314 06:00:38.566867 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 06:00:38 crc kubenswrapper[4817]: I0314 06:00:38.567763 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 06:00:38 crc kubenswrapper[4817]: I0314 06:00:38.567834 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" gracePeriod=600 Mar 14 06:00:40 crc kubenswrapper[4817]: E0314 06:00:40.686006 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:00:41 crc kubenswrapper[4817]: I0314 06:00:41.160172 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" exitCode=0 Mar 14 
06:00:41 crc kubenswrapper[4817]: I0314 06:00:41.160220 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"} Mar 14 06:00:41 crc kubenswrapper[4817]: I0314 06:00:41.160262 4817 scope.go:117] "RemoveContainer" containerID="8509fe209e0d6b6f9ac3932c37444f6f3b02987e960352c4af8c1492f53dab1b" Mar 14 06:00:41 crc kubenswrapper[4817]: I0314 06:00:41.160665 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:00:41 crc kubenswrapper[4817]: E0314 06:00:41.161108 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:00:41 crc kubenswrapper[4817]: I0314 06:00:41.586277 4817 scope.go:117] "RemoveContainer" containerID="32fa62abf72f73eceea4d7c9ad5e88badb95adf335d50f4b69180e1d166ba8d9" Mar 14 06:00:53 crc kubenswrapper[4817]: I0314 06:00:53.732997 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:00:53 crc kubenswrapper[4817]: E0314 06:00:53.734126 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.175232 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29557801-4vr2l"] Mar 14 06:01:00 crc kubenswrapper[4817]: E0314 06:01:00.176886 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec34242c-dc4b-4feb-ae3b-16d3b01c48ad" containerName="oc" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.177024 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec34242c-dc4b-4feb-ae3b-16d3b01c48ad" containerName="oc" Mar 14 06:01:00 crc kubenswrapper[4817]: E0314 06:01:00.177061 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65221d2c-5834-4b80-a1b6-a0240e6d77de" containerName="collect-profiles" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.177070 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="65221d2c-5834-4b80-a1b6-a0240e6d77de" containerName="collect-profiles" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.177303 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec34242c-dc4b-4feb-ae3b-16d3b01c48ad" containerName="oc" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.177333 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="65221d2c-5834-4b80-a1b6-a0240e6d77de" containerName="collect-profiles" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.188467 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.213966 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29557801-4vr2l"] Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.314371 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-combined-ca-bundle\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.314419 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-fernet-keys\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.314499 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-config-data\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.315009 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn9jn\" (UniqueName: \"kubernetes.io/projected/3f78210b-4544-4dc2-8e6e-a873af162323-kube-api-access-qn9jn\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.417379 4817 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-combined-ca-bundle\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.417429 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-fernet-keys\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.417515 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-config-data\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.417556 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn9jn\" (UniqueName: \"kubernetes.io/projected/3f78210b-4544-4dc2-8e6e-a873af162323-kube-api-access-qn9jn\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.437953 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-combined-ca-bundle\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.438148 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-config-data\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.439667 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-fernet-keys\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.441242 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn9jn\" (UniqueName: \"kubernetes.io/projected/3f78210b-4544-4dc2-8e6e-a873af162323-kube-api-access-qn9jn\") pod \"keystone-cron-29557801-4vr2l\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:00 crc kubenswrapper[4817]: I0314 06:01:00.516394 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:01 crc kubenswrapper[4817]: I0314 06:01:01.037346 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29557801-4vr2l"] Mar 14 06:01:01 crc kubenswrapper[4817]: I0314 06:01:01.383641 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557801-4vr2l" event={"ID":"3f78210b-4544-4dc2-8e6e-a873af162323","Type":"ContainerStarted","Data":"145120f48f35d6eadf5c576212acc593ffb825527ecf0d07aa10216a29335b45"} Mar 14 06:01:01 crc kubenswrapper[4817]: I0314 06:01:01.384196 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557801-4vr2l" event={"ID":"3f78210b-4544-4dc2-8e6e-a873af162323","Type":"ContainerStarted","Data":"0a6b32ea6c0985074868b55258625e8c119d65e888eebc46109d914caf1445fd"} Mar 14 06:01:01 crc kubenswrapper[4817]: I0314 06:01:01.406483 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29557801-4vr2l" podStartSLOduration=1.4064600440000001 podStartE2EDuration="1.406460044s" podCreationTimestamp="2026-03-14 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:01:01.405786195 +0000 UTC m=+1715.444046941" watchObservedRunningTime="2026-03-14 06:01:01.406460044 +0000 UTC m=+1715.444720790" Mar 14 06:01:04 crc kubenswrapper[4817]: I0314 06:01:04.422842 4817 generic.go:334] "Generic (PLEG): container finished" podID="3f78210b-4544-4dc2-8e6e-a873af162323" containerID="145120f48f35d6eadf5c576212acc593ffb825527ecf0d07aa10216a29335b45" exitCode=0 Mar 14 06:01:04 crc kubenswrapper[4817]: I0314 06:01:04.422949 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557801-4vr2l" 
event={"ID":"3f78210b-4544-4dc2-8e6e-a873af162323","Type":"ContainerDied","Data":"145120f48f35d6eadf5c576212acc593ffb825527ecf0d07aa10216a29335b45"} Mar 14 06:01:05 crc kubenswrapper[4817]: I0314 06:01:05.812490 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:05 crc kubenswrapper[4817]: I0314 06:01:05.953785 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-fernet-keys\") pod \"3f78210b-4544-4dc2-8e6e-a873af162323\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " Mar 14 06:01:05 crc kubenswrapper[4817]: I0314 06:01:05.953981 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn9jn\" (UniqueName: \"kubernetes.io/projected/3f78210b-4544-4dc2-8e6e-a873af162323-kube-api-access-qn9jn\") pod \"3f78210b-4544-4dc2-8e6e-a873af162323\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " Mar 14 06:01:05 crc kubenswrapper[4817]: I0314 06:01:05.954120 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-combined-ca-bundle\") pod \"3f78210b-4544-4dc2-8e6e-a873af162323\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " Mar 14 06:01:05 crc kubenswrapper[4817]: I0314 06:01:05.954269 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-config-data\") pod \"3f78210b-4544-4dc2-8e6e-a873af162323\" (UID: \"3f78210b-4544-4dc2-8e6e-a873af162323\") " Mar 14 06:01:05 crc kubenswrapper[4817]: I0314 06:01:05.962225 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f78210b-4544-4dc2-8e6e-a873af162323-kube-api-access-qn9jn" 
(OuterVolumeSpecName: "kube-api-access-qn9jn") pod "3f78210b-4544-4dc2-8e6e-a873af162323" (UID: "3f78210b-4544-4dc2-8e6e-a873af162323"). InnerVolumeSpecName "kube-api-access-qn9jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:01:05 crc kubenswrapper[4817]: I0314 06:01:05.966807 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3f78210b-4544-4dc2-8e6e-a873af162323" (UID: "3f78210b-4544-4dc2-8e6e-a873af162323"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:01:05 crc kubenswrapper[4817]: I0314 06:01:05.988419 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f78210b-4544-4dc2-8e6e-a873af162323" (UID: "3f78210b-4544-4dc2-8e6e-a873af162323"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.015775 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-config-data" (OuterVolumeSpecName: "config-data") pod "3f78210b-4544-4dc2-8e6e-a873af162323" (UID: "3f78210b-4544-4dc2-8e6e-a873af162323"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.056206 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn9jn\" (UniqueName: \"kubernetes.io/projected/3f78210b-4544-4dc2-8e6e-a873af162323-kube-api-access-qn9jn\") on node \"crc\" DevicePath \"\"" Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.056249 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.056262 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.056273 4817 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f78210b-4544-4dc2-8e6e-a873af162323-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.443684 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557801-4vr2l" event={"ID":"3f78210b-4544-4dc2-8e6e-a873af162323","Type":"ContainerDied","Data":"0a6b32ea6c0985074868b55258625e8c119d65e888eebc46109d914caf1445fd"} Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.444300 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a6b32ea6c0985074868b55258625e8c119d65e888eebc46109d914caf1445fd" Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.443737 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29557801-4vr2l" Mar 14 06:01:06 crc kubenswrapper[4817]: I0314 06:01:06.744744 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:01:06 crc kubenswrapper[4817]: E0314 06:01:06.745146 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.084668 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gbmkl"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.099567 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4b54-account-create-update-xnzhj"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.113910 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fgpxh"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.123067 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-18c9-account-create-update-kdzmx"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.134365 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fgpxh"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.146663 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-040f-account-create-update-4vdv7"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.158028 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gbmkl"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.168130 4817 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-99d9s"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.176673 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4b54-account-create-update-xnzhj"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.185472 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-99d9s"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.195056 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-040f-account-create-update-4vdv7"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.204285 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-18c9-account-create-update-kdzmx"] Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.751832 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be6610c-71c9-4f99-8b48-ff76fd651942" path="/var/lib/kubelet/pods/0be6610c-71c9-4f99-8b48-ff76fd651942/volumes" Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.753203 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b1b7f4d-c285-477d-87aa-2d2b3be6053d" path="/var/lib/kubelet/pods/1b1b7f4d-c285-477d-87aa-2d2b3be6053d/volumes" Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.754605 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2900aee5-7edc-459c-a5ef-18b08d486d32" path="/var/lib/kubelet/pods/2900aee5-7edc-459c-a5ef-18b08d486d32/volumes" Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.756050 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f26155-aa88-49fa-a106-542d5b2e0cbb" path="/var/lib/kubelet/pods/55f26155-aa88-49fa-a106-542d5b2e0cbb/volumes" Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.758830 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db5d912-5a84-436a-a35c-82d4af03f6e5" 
path="/var/lib/kubelet/pods/6db5d912-5a84-436a-a35c-82d4af03f6e5/volumes" Mar 14 06:01:10 crc kubenswrapper[4817]: I0314 06:01:10.760040 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d078d0d8-b1c0-45c0-b578-772b6bb6350c" path="/var/lib/kubelet/pods/d078d0d8-b1c0-45c0-b578-772b6bb6350c/volumes" Mar 14 06:01:20 crc kubenswrapper[4817]: I0314 06:01:20.732604 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:01:20 crc kubenswrapper[4817]: E0314 06:01:20.735297 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:01:30 crc kubenswrapper[4817]: I0314 06:01:30.037854 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gp8wg"] Mar 14 06:01:30 crc kubenswrapper[4817]: I0314 06:01:30.050815 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gp8wg"] Mar 14 06:01:30 crc kubenswrapper[4817]: I0314 06:01:30.758146 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d16d505-b626-4d62-8688-688edc7182c2" path="/var/lib/kubelet/pods/7d16d505-b626-4d62-8688-688edc7182c2/volumes" Mar 14 06:01:31 crc kubenswrapper[4817]: I0314 06:01:31.705373 4817 generic.go:334] "Generic (PLEG): container finished" podID="a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" containerID="674e054e82a8e88a0b6aa731b4eae7f804c9dc4e2ad6c1f2e82fc15d2a7a00d8" exitCode=0 Mar 14 06:01:31 crc kubenswrapper[4817]: I0314 06:01:31.705434 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" event={"ID":"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0","Type":"ContainerDied","Data":"674e054e82a8e88a0b6aa731b4eae7f804c9dc4e2ad6c1f2e82fc15d2a7a00d8"} Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.137860 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.283963 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-ssh-key-openstack-edpm-ipam\") pod \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.284091 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9sls\" (UniqueName: \"kubernetes.io/projected/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-kube-api-access-s9sls\") pod \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.284158 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-bootstrap-combined-ca-bundle\") pod \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.284367 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-inventory\") pod \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\" (UID: \"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0\") " Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.294232 4817 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" (UID: "a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.300836 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-kube-api-access-s9sls" (OuterVolumeSpecName: "kube-api-access-s9sls") pod "a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" (UID: "a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0"). InnerVolumeSpecName "kube-api-access-s9sls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.318156 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" (UID: "a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.319489 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-inventory" (OuterVolumeSpecName: "inventory") pod "a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" (UID: "a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.390911 4817 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.390958 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.390972 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.390981 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9sls\" (UniqueName: \"kubernetes.io/projected/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0-kube-api-access-s9sls\") on node \"crc\" DevicePath \"\"" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.731187 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc" event={"ID":"a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0","Type":"ContainerDied","Data":"60420f75fb1c531ba7c7f8c54725d226a68bae4c52c644ec9f4119ce0395bab7"} Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.731247 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60420f75fb1c531ba7c7f8c54725d226a68bae4c52c644ec9f4119ce0395bab7" Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.731250 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.815338 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"]
Mar 14 06:01:33 crc kubenswrapper[4817]: E0314 06:01:33.815862 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f78210b-4544-4dc2-8e6e-a873af162323" containerName="keystone-cron"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.815892 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f78210b-4544-4dc2-8e6e-a873af162323" containerName="keystone-cron"
Mar 14 06:01:33 crc kubenswrapper[4817]: E0314 06:01:33.815962 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.815976 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.816184 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.816213 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f78210b-4544-4dc2-8e6e-a873af162323" containerName="keystone-cron"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.817397 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.819689 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.819986 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.824313 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.824348 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:01:33 crc kubenswrapper[4817]: I0314 06:01:33.832171 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"]
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.003370 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jjt5\" (UniqueName: \"kubernetes.io/projected/03d81081-d17c-4d84-8c1c-c7a41cf680be-kube-api-access-9jjt5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.003446 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.003534 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.105385 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jjt5\" (UniqueName: \"kubernetes.io/projected/03d81081-d17c-4d84-8c1c-c7a41cf680be-kube-api-access-9jjt5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.105474 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.105562 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.110555 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.113537 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.123271 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jjt5\" (UniqueName: \"kubernetes.io/projected/03d81081-d17c-4d84-8c1c-c7a41cf680be-kube-api-access-9jjt5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.150945 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.688657 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"]
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.691676 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 06:01:34 crc kubenswrapper[4817]: I0314 06:01:34.742931 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz" event={"ID":"03d81081-d17c-4d84-8c1c-c7a41cf680be","Type":"ContainerStarted","Data":"0816ada017328dbf3ca66541f4d99167a48662aa717406b29809fc7bd736f6be"}
Mar 14 06:01:35 crc kubenswrapper[4817]: I0314 06:01:35.041704 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-zzk9h"]
Mar 14 06:01:35 crc kubenswrapper[4817]: I0314 06:01:35.056730 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zzk9h"]
Mar 14 06:01:35 crc kubenswrapper[4817]: I0314 06:01:35.733045 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:01:35 crc kubenswrapper[4817]: E0314 06:01:35.735151 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:01:35 crc kubenswrapper[4817]: I0314 06:01:35.751040 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz" event={"ID":"03d81081-d17c-4d84-8c1c-c7a41cf680be","Type":"ContainerStarted","Data":"0a9f79e67c822fc44510cec292a4557c4c06aae94b541f5eb08fa031e968c9e9"}
Mar 14 06:01:35 crc kubenswrapper[4817]: I0314 06:01:35.780015 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz" podStartSLOduration=2.290145817 podStartE2EDuration="2.779996958s" podCreationTimestamp="2026-03-14 06:01:33 +0000 UTC" firstStartedPulling="2026-03-14 06:01:34.691388243 +0000 UTC m=+1748.729648989" lastFinishedPulling="2026-03-14 06:01:35.181239384 +0000 UTC m=+1749.219500130" observedRunningTime="2026-03-14 06:01:35.773491122 +0000 UTC m=+1749.811751868" watchObservedRunningTime="2026-03-14 06:01:35.779996958 +0000 UTC m=+1749.818257704"
Mar 14 06:01:36 crc kubenswrapper[4817]: I0314 06:01:36.771080 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="081334e9-833f-4a52-893e-29c7ac2241ac" path="/var/lib/kubelet/pods/081334e9-833f-4a52-893e-29c7ac2241ac/volumes"
Mar 14 06:01:41 crc kubenswrapper[4817]: I0314 06:01:41.677972 4817 scope.go:117] "RemoveContainer" containerID="64bb8de80cfda8b9b7c8f28e9f5e5d982bad7ca9a3e525abd87dbd41ca70a902"
Mar 14 06:01:41 crc kubenswrapper[4817]: I0314 06:01:41.717152 4817 scope.go:117] "RemoveContainer" containerID="46ae8e5162f6de03729b64304ff13a04a954dafe9262a20b47af1bb832126392"
Mar 14 06:01:41 crc kubenswrapper[4817]: I0314 06:01:41.771223 4817 scope.go:117] "RemoveContainer" containerID="bd298c7ff71b39f1828c09c2c876b02a5f5c00a037eaaf83d563e40280ca952a"
Mar 14 06:01:41 crc kubenswrapper[4817]: I0314 06:01:41.822922 4817 scope.go:117] "RemoveContainer" containerID="5389ddca85e27077ddc0d56774f08f83d3b92a3e993de7976097a33d774fa1de"
Mar 14 06:01:41 crc kubenswrapper[4817]: I0314 06:01:41.865134 4817 scope.go:117] "RemoveContainer" containerID="df93c48553a8c805e604e276073023d87222e8762e527129e8b005ee18d65e04"
Mar 14 06:01:41 crc kubenswrapper[4817]: I0314 06:01:41.913584 4817 scope.go:117] "RemoveContainer" containerID="30c1861aa9d526d0408f342cdf0148e81a751197ef24a6eaa174ce5d5a6d2f39"
Mar 14 06:01:41 crc kubenswrapper[4817]: I0314 06:01:41.960495 4817 scope.go:117] "RemoveContainer" containerID="d788d696236a365aa9c20c9edb2084f19339d4599e97d1748ebba798b5a8b3e6"
Mar 14 06:01:41 crc kubenswrapper[4817]: I0314 06:01:41.982653 4817 scope.go:117] "RemoveContainer" containerID="c80b48c2fb2414ea6566f071e90ddeaa2e5555fbd1a4adafcca10b4d8e8ffb0b"
Mar 14 06:01:48 crc kubenswrapper[4817]: I0314 06:01:48.052390 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-97754"]
Mar 14 06:01:48 crc kubenswrapper[4817]: I0314 06:01:48.061751 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-345b-account-create-update-7ww2n"]
Mar 14 06:01:48 crc kubenswrapper[4817]: I0314 06:01:48.073777 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-97754"]
Mar 14 06:01:48 crc kubenswrapper[4817]: I0314 06:01:48.081229 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-345b-account-create-update-7ww2n"]
Mar 14 06:01:48 crc kubenswrapper[4817]: I0314 06:01:48.743194 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04aca68c-f19d-46d3-a950-92b0c5aec127" path="/var/lib/kubelet/pods/04aca68c-f19d-46d3-a950-92b0c5aec127/volumes"
Mar 14 06:01:48 crc kubenswrapper[4817]: I0314 06:01:48.743942 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485" path="/var/lib/kubelet/pods/ce266d6e-a44d-41d8-ba3d-b1a4f8b3c485/volumes"
Mar 14 06:01:49 crc kubenswrapper[4817]: I0314 06:01:49.034072 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-t7fj5"]
Mar 14 06:01:49 crc kubenswrapper[4817]: I0314 06:01:49.044278 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-t7fj5"]
Mar 14 06:01:49 crc kubenswrapper[4817]: I0314 06:01:49.732471 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:01:49 crc kubenswrapper[4817]: E0314 06:01:49.732793 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:01:50 crc kubenswrapper[4817]: I0314 06:01:50.744223 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8099b872-793e-4e42-816d-a6ca9b72624c" path="/var/lib/kubelet/pods/8099b872-793e-4e42-816d-a6ca9b72624c/volumes"
Mar 14 06:01:57 crc kubenswrapper[4817]: I0314 06:01:57.041957 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-chbsl"]
Mar 14 06:01:57 crc kubenswrapper[4817]: I0314 06:01:57.057886 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3a58-account-create-update-qt2nl"]
Mar 14 06:01:57 crc kubenswrapper[4817]: I0314 06:01:57.073219 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d4d1-account-create-update-4v66d"]
Mar 14 06:01:57 crc kubenswrapper[4817]: I0314 06:01:57.085013 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-chbsl"]
Mar 14 06:01:57 crc kubenswrapper[4817]: I0314 06:01:57.096824 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d4d1-account-create-update-4v66d"]
Mar 14 06:01:57 crc kubenswrapper[4817]: I0314 06:01:57.106010 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3a58-account-create-update-qt2nl"]
Mar 14 06:01:58 crc kubenswrapper[4817]: I0314 06:01:58.744552 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcb7120-12b3-4b62-9da4-e8c44e8a3567" path="/var/lib/kubelet/pods/1bcb7120-12b3-4b62-9da4-e8c44e8a3567/volumes"
Mar 14 06:01:58 crc kubenswrapper[4817]: I0314 06:01:58.746170 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c36c132-bd25-4067-af91-a4ae33514875" path="/var/lib/kubelet/pods/8c36c132-bd25-4067-af91-a4ae33514875/volumes"
Mar 14 06:01:58 crc kubenswrapper[4817]: I0314 06:01:58.746829 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd8f446-ad9f-42b4-8709-4c5ca19a69b5" path="/var/lib/kubelet/pods/bcd8f446-ad9f-42b4-8709-4c5ca19a69b5/volumes"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.161060 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557802-7s5sg"]
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.163668 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557802-7s5sg"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.174387 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557802-7s5sg"]
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.183429 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.183883 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.184037 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.283806 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bwlb\" (UniqueName: \"kubernetes.io/projected/f9d97304-d6ea-4fc8-a0f4-d952e4748476-kube-api-access-2bwlb\") pod \"auto-csr-approver-29557802-7s5sg\" (UID: \"f9d97304-d6ea-4fc8-a0f4-d952e4748476\") " pod="openshift-infra/auto-csr-approver-29557802-7s5sg"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.385975 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bwlb\" (UniqueName: \"kubernetes.io/projected/f9d97304-d6ea-4fc8-a0f4-d952e4748476-kube-api-access-2bwlb\") pod \"auto-csr-approver-29557802-7s5sg\" (UID: \"f9d97304-d6ea-4fc8-a0f4-d952e4748476\") " pod="openshift-infra/auto-csr-approver-29557802-7s5sg"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.415403 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bwlb\" (UniqueName: \"kubernetes.io/projected/f9d97304-d6ea-4fc8-a0f4-d952e4748476-kube-api-access-2bwlb\") pod \"auto-csr-approver-29557802-7s5sg\" (UID: \"f9d97304-d6ea-4fc8-a0f4-d952e4748476\") " pod="openshift-infra/auto-csr-approver-29557802-7s5sg"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.518981 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557802-7s5sg"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.733089 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:02:00 crc kubenswrapper[4817]: E0314 06:02:00.733829 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:02:00 crc kubenswrapper[4817]: I0314 06:02:00.996053 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557802-7s5sg"]
Mar 14 06:02:01 crc kubenswrapper[4817]: I0314 06:02:01.090298 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557802-7s5sg" event={"ID":"f9d97304-d6ea-4fc8-a0f4-d952e4748476","Type":"ContainerStarted","Data":"3b894f625d4aec851486fa130e074403c2153707f7e12ca0cedf784f4cd4a574"}
Mar 14 06:02:03 crc kubenswrapper[4817]: I0314 06:02:03.113049 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557802-7s5sg" event={"ID":"f9d97304-d6ea-4fc8-a0f4-d952e4748476","Type":"ContainerStarted","Data":"3506ac538c8389e40884db0a6112f4fee5e6ebc98588b0b8f1e1f126b6992b08"}
Mar 14 06:02:03 crc kubenswrapper[4817]: I0314 06:02:03.139043 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557802-7s5sg" podStartSLOduration=1.515438753 podStartE2EDuration="3.139022874s" podCreationTimestamp="2026-03-14 06:02:00 +0000 UTC" firstStartedPulling="2026-03-14 06:02:01.004002068 +0000 UTC m=+1775.042262814" lastFinishedPulling="2026-03-14 06:02:02.627586189 +0000 UTC m=+1776.665846935" observedRunningTime="2026-03-14 06:02:03.13293297 +0000 UTC m=+1777.171193716" watchObservedRunningTime="2026-03-14 06:02:03.139022874 +0000 UTC m=+1777.177283620"
Mar 14 06:02:04 crc kubenswrapper[4817]: I0314 06:02:04.122596 4817 generic.go:334] "Generic (PLEG): container finished" podID="f9d97304-d6ea-4fc8-a0f4-d952e4748476" containerID="3506ac538c8389e40884db0a6112f4fee5e6ebc98588b0b8f1e1f126b6992b08" exitCode=0
Mar 14 06:02:04 crc kubenswrapper[4817]: I0314 06:02:04.122647 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557802-7s5sg" event={"ID":"f9d97304-d6ea-4fc8-a0f4-d952e4748476","Type":"ContainerDied","Data":"3506ac538c8389e40884db0a6112f4fee5e6ebc98588b0b8f1e1f126b6992b08"}
Mar 14 06:02:05 crc kubenswrapper[4817]: I0314 06:02:05.526757 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557802-7s5sg"
Mar 14 06:02:05 crc kubenswrapper[4817]: I0314 06:02:05.703812 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bwlb\" (UniqueName: \"kubernetes.io/projected/f9d97304-d6ea-4fc8-a0f4-d952e4748476-kube-api-access-2bwlb\") pod \"f9d97304-d6ea-4fc8-a0f4-d952e4748476\" (UID: \"f9d97304-d6ea-4fc8-a0f4-d952e4748476\") "
Mar 14 06:02:05 crc kubenswrapper[4817]: I0314 06:02:05.710999 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d97304-d6ea-4fc8-a0f4-d952e4748476-kube-api-access-2bwlb" (OuterVolumeSpecName: "kube-api-access-2bwlb") pod "f9d97304-d6ea-4fc8-a0f4-d952e4748476" (UID: "f9d97304-d6ea-4fc8-a0f4-d952e4748476"). InnerVolumeSpecName "kube-api-access-2bwlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:02:05 crc kubenswrapper[4817]: I0314 06:02:05.806002 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bwlb\" (UniqueName: \"kubernetes.io/projected/f9d97304-d6ea-4fc8-a0f4-d952e4748476-kube-api-access-2bwlb\") on node \"crc\" DevicePath \"\""
Mar 14 06:02:06 crc kubenswrapper[4817]: I0314 06:02:06.148542 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557802-7s5sg" event={"ID":"f9d97304-d6ea-4fc8-a0f4-d952e4748476","Type":"ContainerDied","Data":"3b894f625d4aec851486fa130e074403c2153707f7e12ca0cedf784f4cd4a574"}
Mar 14 06:02:06 crc kubenswrapper[4817]: I0314 06:02:06.148585 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b894f625d4aec851486fa130e074403c2153707f7e12ca0cedf784f4cd4a574"
Mar 14 06:02:06 crc kubenswrapper[4817]: I0314 06:02:06.148740 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557802-7s5sg"
Mar 14 06:02:06 crc kubenswrapper[4817]: I0314 06:02:06.210151 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557796-dcrbk"]
Mar 14 06:02:06 crc kubenswrapper[4817]: I0314 06:02:06.222497 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557796-dcrbk"]
Mar 14 06:02:06 crc kubenswrapper[4817]: I0314 06:02:06.743410 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5689dfe-d803-4814-90cd-c8f530df370c" path="/var/lib/kubelet/pods/d5689dfe-d803-4814-90cd-c8f530df370c/volumes"
Mar 14 06:02:13 crc kubenswrapper[4817]: I0314 06:02:13.732925 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:02:13 crc kubenswrapper[4817]: E0314 06:02:13.734269 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:02:25 crc kubenswrapper[4817]: I0314 06:02:25.733163 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:02:25 crc kubenswrapper[4817]: E0314 06:02:25.734623 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:02:38 crc kubenswrapper[4817]: I0314 06:02:38.732108 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:02:38 crc kubenswrapper[4817]: E0314 06:02:38.733253 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:02:41 crc kubenswrapper[4817]: I0314 06:02:41.067463 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-856gn"]
Mar 14 06:02:41 crc kubenswrapper[4817]: I0314 06:02:41.082648 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-856gn"]
Mar 14 06:02:41 crc kubenswrapper[4817]: I0314 06:02:41.525502 4817 generic.go:334] "Generic (PLEG): container finished" podID="03d81081-d17c-4d84-8c1c-c7a41cf680be" containerID="0a9f79e67c822fc44510cec292a4557c4c06aae94b541f5eb08fa031e968c9e9" exitCode=0
Mar 14 06:02:41 crc kubenswrapper[4817]: I0314 06:02:41.525564 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz" event={"ID":"03d81081-d17c-4d84-8c1c-c7a41cf680be","Type":"ContainerDied","Data":"0a9f79e67c822fc44510cec292a4557c4c06aae94b541f5eb08fa031e968c9e9"}
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.142784 4817 scope.go:117] "RemoveContainer" containerID="bc669cc178cb9c4bd6993f56f76afd2250667e26e9cd839910ee396d6fafe5d2"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.174167 4817 scope.go:117] "RemoveContainer" containerID="383726b63b64c5e1658e6a9df7c5948c3e9f28a42462512ab694bfd7aaffc552"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.242478 4817 scope.go:117] "RemoveContainer" containerID="40eca744f5a6d9c042b7ec994be3afb3d2943a8e926895a0e56dff7a602e113b"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.265053 4817 scope.go:117] "RemoveContainer" containerID="21c23c877d2a3de92546fed1dab823be773e2add6cf5c304f62c43ef13aa6b0b"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.312654 4817 scope.go:117] "RemoveContainer" containerID="969a12ead2d02c5597229525ea242f021b097283c3fbfa27a9dfa306f2429620"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.351006 4817 scope.go:117] "RemoveContainer" containerID="884666666e547801c7d8c3bb822ca24423ddc55626c3fe4332777f97654af540"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.382846 4817 scope.go:117] "RemoveContainer" containerID="2491350498f8e956dae276a2e6d4e8465e2a798b7f2bef83a15ccec4c2023dda"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.418731 4817 scope.go:117] "RemoveContainer" containerID="84fbcd97ed817f46ddfada6eb1dda57fd18ae6f3fa71dede60be4b3c90117508"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.444208 4817 scope.go:117] "RemoveContainer" containerID="9ef359b71ca51968e7558a2fd802642b825278b6ae0dc00f487d31f2953ace70"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.742945 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b59299-5f4c-41d2-ad05-2d43b0c0cbfb" path="/var/lib/kubelet/pods/09b59299-5f4c-41d2-ad05-2d43b0c0cbfb/volumes"
Mar 14 06:02:42 crc kubenswrapper[4817]: I0314 06:02:42.945584 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.018492 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-ssh-key-openstack-edpm-ipam\") pod \"03d81081-d17c-4d84-8c1c-c7a41cf680be\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") "
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.019115 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-inventory\") pod \"03d81081-d17c-4d84-8c1c-c7a41cf680be\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") "
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.019286 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jjt5\" (UniqueName: \"kubernetes.io/projected/03d81081-d17c-4d84-8c1c-c7a41cf680be-kube-api-access-9jjt5\") pod \"03d81081-d17c-4d84-8c1c-c7a41cf680be\" (UID: \"03d81081-d17c-4d84-8c1c-c7a41cf680be\") "
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.025718 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d81081-d17c-4d84-8c1c-c7a41cf680be-kube-api-access-9jjt5" (OuterVolumeSpecName: "kube-api-access-9jjt5") pod "03d81081-d17c-4d84-8c1c-c7a41cf680be" (UID: "03d81081-d17c-4d84-8c1c-c7a41cf680be"). InnerVolumeSpecName "kube-api-access-9jjt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.047160 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03d81081-d17c-4d84-8c1c-c7a41cf680be" (UID: "03d81081-d17c-4d84-8c1c-c7a41cf680be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.048160 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-inventory" (OuterVolumeSpecName: "inventory") pod "03d81081-d17c-4d84-8c1c-c7a41cf680be" (UID: "03d81081-d17c-4d84-8c1c-c7a41cf680be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.122002 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jjt5\" (UniqueName: \"kubernetes.io/projected/03d81081-d17c-4d84-8c1c-c7a41cf680be-kube-api-access-9jjt5\") on node \"crc\" DevicePath \"\""
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.122295 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.122352 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03d81081-d17c-4d84-8c1c-c7a41cf680be-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.563300 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz" event={"ID":"03d81081-d17c-4d84-8c1c-c7a41cf680be","Type":"ContainerDied","Data":"0816ada017328dbf3ca66541f4d99167a48662aa717406b29809fc7bd736f6be"}
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.563386 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0816ada017328dbf3ca66541f4d99167a48662aa717406b29809fc7bd736f6be"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.563504 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.696492 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"]
Mar 14 06:02:43 crc kubenswrapper[4817]: E0314 06:02:43.696881 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d81081-d17c-4d84-8c1c-c7a41cf680be" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.696917 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d81081-d17c-4d84-8c1c-c7a41cf680be" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:02:43 crc kubenswrapper[4817]: E0314 06:02:43.696953 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d97304-d6ea-4fc8-a0f4-d952e4748476" containerName="oc"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.696959 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d97304-d6ea-4fc8-a0f4-d952e4748476" containerName="oc"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.697119 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d81081-d17c-4d84-8c1c-c7a41cf680be" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.697141 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d97304-d6ea-4fc8-a0f4-d952e4748476" containerName="oc"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.697820 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.704099 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.704347 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.704365 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.704534 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.709988 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"]
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.835603 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6j4\" (UniqueName: \"kubernetes.io/projected/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-kube-api-access-nn6j4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.835872 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.836364 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.938617 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.938772 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6j4\" (UniqueName: \"kubernetes.io/projected/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-kube-api-access-nn6j4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.938885 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.944081 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.944096 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:43 crc kubenswrapper[4817]: I0314 06:02:43.968993 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn6j4\" (UniqueName: \"kubernetes.io/projected/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-kube-api-access-nn6j4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"
Mar 14 06:02:44 crc kubenswrapper[4817]: I0314 06:02:44.016740 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf" Mar 14 06:02:44 crc kubenswrapper[4817]: I0314 06:02:44.572579 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"] Mar 14 06:02:45 crc kubenswrapper[4817]: I0314 06:02:45.581478 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf" event={"ID":"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9","Type":"ContainerStarted","Data":"b39397c2cde7048076f3413f474939bd7ae0038c9086e5ce740c6f75c340c1e9"} Mar 14 06:02:46 crc kubenswrapper[4817]: I0314 06:02:46.592403 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf" event={"ID":"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9","Type":"ContainerStarted","Data":"e73a24702cb71ef3d14e4039590d785e78ccb9bd82233b0e575f9a5021117c89"} Mar 14 06:02:46 crc kubenswrapper[4817]: I0314 06:02:46.616059 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf" podStartSLOduration=2.494695489 podStartE2EDuration="3.616039476s" podCreationTimestamp="2026-03-14 06:02:43 +0000 UTC" firstStartedPulling="2026-03-14 06:02:44.566097342 +0000 UTC m=+1818.604358088" lastFinishedPulling="2026-03-14 06:02:45.687441319 +0000 UTC m=+1819.725702075" observedRunningTime="2026-03-14 06:02:46.61162388 +0000 UTC m=+1820.649884626" watchObservedRunningTime="2026-03-14 06:02:46.616039476 +0000 UTC m=+1820.654300242" Mar 14 06:02:49 crc kubenswrapper[4817]: I0314 06:02:49.732560 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:02:49 crc kubenswrapper[4817]: E0314 06:02:49.733498 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:02:50 crc kubenswrapper[4817]: I0314 06:02:50.650389 4817 generic.go:334] "Generic (PLEG): container finished" podID="f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9" containerID="e73a24702cb71ef3d14e4039590d785e78ccb9bd82233b0e575f9a5021117c89" exitCode=0 Mar 14 06:02:50 crc kubenswrapper[4817]: I0314 06:02:50.650541 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf" event={"ID":"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9","Type":"ContainerDied","Data":"e73a24702cb71ef3d14e4039590d785e78ccb9bd82233b0e575f9a5021117c89"} Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.120629 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.315707 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn6j4\" (UniqueName: \"kubernetes.io/projected/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-kube-api-access-nn6j4\") pod \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.315861 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-ssh-key-openstack-edpm-ipam\") pod \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.315909 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-inventory\") pod \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\" (UID: \"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9\") " Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.333476 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-kube-api-access-nn6j4" (OuterVolumeSpecName: "kube-api-access-nn6j4") pod "f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9" (UID: "f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9"). InnerVolumeSpecName "kube-api-access-nn6j4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.343514 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9" (UID: "f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.344807 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-inventory" (OuterVolumeSpecName: "inventory") pod "f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9" (UID: "f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.418619 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn6j4\" (UniqueName: \"kubernetes.io/projected/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-kube-api-access-nn6j4\") on node \"crc\" DevicePath \"\"" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.418667 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.418681 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.668691 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf" 
event={"ID":"f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9","Type":"ContainerDied","Data":"b39397c2cde7048076f3413f474939bd7ae0038c9086e5ce740c6f75c340c1e9"} Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.668738 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39397c2cde7048076f3413f474939bd7ae0038c9086e5ce740c6f75c340c1e9" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.668759 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.767619 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw"] Mar 14 06:02:52 crc kubenswrapper[4817]: E0314 06:02:52.768318 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.768351 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.768687 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.769604 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.773437 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.773581 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.773700 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.773837 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.775991 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw"] Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.930019 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8n4x\" (UniqueName: \"kubernetes.io/projected/bdedab8b-b434-46b2-be3e-e2fc9be5119e-kube-api-access-s8n4x\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.930748 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:52 crc kubenswrapper[4817]: I0314 06:02:52.930915 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:53 crc kubenswrapper[4817]: I0314 06:02:53.032230 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8n4x\" (UniqueName: \"kubernetes.io/projected/bdedab8b-b434-46b2-be3e-e2fc9be5119e-kube-api-access-s8n4x\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:53 crc kubenswrapper[4817]: I0314 06:02:53.032407 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:53 crc kubenswrapper[4817]: I0314 06:02:53.032523 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:53 crc kubenswrapper[4817]: I0314 06:02:53.037495 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:53 crc kubenswrapper[4817]: I0314 06:02:53.037722 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:53 crc kubenswrapper[4817]: I0314 06:02:53.051392 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8n4x\" (UniqueName: \"kubernetes.io/projected/bdedab8b-b434-46b2-be3e-e2fc9be5119e-kube-api-access-s8n4x\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fq2cw\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:53 crc kubenswrapper[4817]: I0314 06:02:53.104717 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:02:53 crc kubenswrapper[4817]: I0314 06:02:53.692222 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw"] Mar 14 06:02:54 crc kubenswrapper[4817]: I0314 06:02:54.692410 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" event={"ID":"bdedab8b-b434-46b2-be3e-e2fc9be5119e","Type":"ContainerStarted","Data":"0e3b31413bf75cba0efb8fe57badbf63fa7a138661e1b6656e0f5e4adafede6d"} Mar 14 06:02:54 crc kubenswrapper[4817]: I0314 06:02:54.692937 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" event={"ID":"bdedab8b-b434-46b2-be3e-e2fc9be5119e","Type":"ContainerStarted","Data":"32c82dff06f6c94423e705ae39e42936f828d48defd77300ce34546411727385"} Mar 14 06:02:54 crc kubenswrapper[4817]: I0314 06:02:54.724018 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" podStartSLOduration=2.301416219 podStartE2EDuration="2.723882538s" podCreationTimestamp="2026-03-14 06:02:52 +0000 UTC" firstStartedPulling="2026-03-14 06:02:53.693364042 +0000 UTC m=+1827.731624788" lastFinishedPulling="2026-03-14 06:02:54.115830361 +0000 UTC m=+1828.154091107" observedRunningTime="2026-03-14 06:02:54.715075786 +0000 UTC m=+1828.753336532" watchObservedRunningTime="2026-03-14 06:02:54.723882538 +0000 UTC m=+1828.762143284" Mar 14 06:03:02 crc kubenswrapper[4817]: I0314 06:03:02.732282 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:03:02 crc kubenswrapper[4817]: E0314 06:03:02.733309 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:03:13 crc kubenswrapper[4817]: I0314 06:03:13.732508 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:03:13 crc kubenswrapper[4817]: E0314 06:03:13.733862 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:03:23 crc kubenswrapper[4817]: I0314 06:03:23.052289 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bgn4z"] Mar 14 06:03:23 crc kubenswrapper[4817]: I0314 06:03:23.060968 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bgn4z"] Mar 14 06:03:24 crc kubenswrapper[4817]: I0314 06:03:24.752499 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8efc2969-6d96-48a4-8fc1-108da2c8f778" path="/var/lib/kubelet/pods/8efc2969-6d96-48a4-8fc1-108da2c8f778/volumes" Mar 14 06:03:25 crc kubenswrapper[4817]: I0314 06:03:25.733341 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:03:25 crc kubenswrapper[4817]: E0314 06:03:25.734329 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:03:33 crc kubenswrapper[4817]: I0314 06:03:33.077630 4817 generic.go:334] "Generic (PLEG): container finished" podID="bdedab8b-b434-46b2-be3e-e2fc9be5119e" containerID="0e3b31413bf75cba0efb8fe57badbf63fa7a138661e1b6656e0f5e4adafede6d" exitCode=0 Mar 14 06:03:33 crc kubenswrapper[4817]: I0314 06:03:33.077732 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" event={"ID":"bdedab8b-b434-46b2-be3e-e2fc9be5119e","Type":"ContainerDied","Data":"0e3b31413bf75cba0efb8fe57badbf63fa7a138661e1b6656e0f5e4adafede6d"} Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.514764 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.628679 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8n4x\" (UniqueName: \"kubernetes.io/projected/bdedab8b-b434-46b2-be3e-e2fc9be5119e-kube-api-access-s8n4x\") pod \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.628845 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-inventory\") pod \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.629051 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-ssh-key-openstack-edpm-ipam\") pod \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\" (UID: \"bdedab8b-b434-46b2-be3e-e2fc9be5119e\") " Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.641575 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdedab8b-b434-46b2-be3e-e2fc9be5119e-kube-api-access-s8n4x" (OuterVolumeSpecName: "kube-api-access-s8n4x") pod "bdedab8b-b434-46b2-be3e-e2fc9be5119e" (UID: "bdedab8b-b434-46b2-be3e-e2fc9be5119e"). InnerVolumeSpecName "kube-api-access-s8n4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.662124 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bdedab8b-b434-46b2-be3e-e2fc9be5119e" (UID: "bdedab8b-b434-46b2-be3e-e2fc9be5119e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.663043 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-inventory" (OuterVolumeSpecName: "inventory") pod "bdedab8b-b434-46b2-be3e-e2fc9be5119e" (UID: "bdedab8b-b434-46b2-be3e-e2fc9be5119e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.730979 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8n4x\" (UniqueName: \"kubernetes.io/projected/bdedab8b-b434-46b2-be3e-e2fc9be5119e-kube-api-access-s8n4x\") on node \"crc\" DevicePath \"\"" Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.731018 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:03:34 crc kubenswrapper[4817]: I0314 06:03:34.731029 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdedab8b-b434-46b2-be3e-e2fc9be5119e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.051881 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-q8744"] Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.069048 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-q8744"] Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.079499 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-znzjw"] Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.087264 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-znzjw"] Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.100421 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" event={"ID":"bdedab8b-b434-46b2-be3e-e2fc9be5119e","Type":"ContainerDied","Data":"32c82dff06f6c94423e705ae39e42936f828d48defd77300ce34546411727385"} Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.100465 4817 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="32c82dff06f6c94423e705ae39e42936f828d48defd77300ce34546411727385" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.100544 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.205122 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"] Mar 14 06:03:35 crc kubenswrapper[4817]: E0314 06:03:35.205597 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdedab8b-b434-46b2-be3e-e2fc9be5119e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.205619 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdedab8b-b434-46b2-be3e-e2fc9be5119e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.205810 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdedab8b-b434-46b2-be3e-e2fc9be5119e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.206439 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.209512 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.211296 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.213468 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.213620 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.240144 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"] Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.344132 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pvsg\" (UniqueName: \"kubernetes.io/projected/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-kube-api-access-2pvsg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59" Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.344238 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59" Mar 14 06:03:35 crc 
kubenswrapper[4817]: I0314 06:03:35.344783 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.447575 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.447763 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.447946 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pvsg\" (UniqueName: \"kubernetes.io/projected/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-kube-api-access-2pvsg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.453868 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.454978 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.467769 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pvsg\" (UniqueName: \"kubernetes.io/projected/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-kube-api-access-2pvsg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:35 crc kubenswrapper[4817]: I0314 06:03:35.536306 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:36 crc kubenswrapper[4817]: I0314 06:03:36.099561 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"]
Mar 14 06:03:36 crc kubenswrapper[4817]: I0314 06:03:36.744442 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59388585-c2da-4111-ad61-aacdf15612aa" path="/var/lib/kubelet/pods/59388585-c2da-4111-ad61-aacdf15612aa/volumes"
Mar 14 06:03:36 crc kubenswrapper[4817]: I0314 06:03:36.747223 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f8ace2-60bf-4cb4-b473-a92c7860b5af" path="/var/lib/kubelet/pods/b1f8ace2-60bf-4cb4-b473-a92c7860b5af/volumes"
Mar 14 06:03:37 crc kubenswrapper[4817]: I0314 06:03:37.120484 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59" event={"ID":"05416e96-e39f-4dfb-abd6-d74c6fe66eb7","Type":"ContainerStarted","Data":"42858a84851a16ccec6a65212c0b5f2423a3cab6bd4999a8ad99a98c34c31b1a"}
Mar 14 06:03:37 crc kubenswrapper[4817]: I0314 06:03:37.121028 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59" event={"ID":"05416e96-e39f-4dfb-abd6-d74c6fe66eb7","Type":"ContainerStarted","Data":"a4ff08b7dec9a11830de2899e3307b718c98a1908c6b542b1faef86c31597849"}
Mar 14 06:03:39 crc kubenswrapper[4817]: I0314 06:03:39.732720 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:03:39 crc kubenswrapper[4817]: E0314 06:03:39.733689 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:03:41 crc kubenswrapper[4817]: I0314 06:03:41.161482 4817 generic.go:334] "Generic (PLEG): container finished" podID="05416e96-e39f-4dfb-abd6-d74c6fe66eb7" containerID="42858a84851a16ccec6a65212c0b5f2423a3cab6bd4999a8ad99a98c34c31b1a" exitCode=0
Mar 14 06:03:41 crc kubenswrapper[4817]: I0314 06:03:41.161610 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59" event={"ID":"05416e96-e39f-4dfb-abd6-d74c6fe66eb7","Type":"ContainerDied","Data":"42858a84851a16ccec6a65212c0b5f2423a3cab6bd4999a8ad99a98c34c31b1a"}
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.630581 4817 scope.go:117] "RemoveContainer" containerID="5f2efe95418943fd1d398d3d751fd55bbdc83ff0337222a34f5cfca8c98cd219"
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.708833 4817 scope.go:117] "RemoveContainer" containerID="94af5f404095236199a2d3f6818c1dcd03cf3916b570b338cc950cd367b01657"
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.723430 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.792974 4817 scope.go:117] "RemoveContainer" containerID="bd3c4dff18234cb6cf97fd9662d40682c8c72033c894a3c68cab09c778f9ff77"
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.809163 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-ssh-key-openstack-edpm-ipam\") pod \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") "
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.809289 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-inventory\") pod \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") "
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.809400 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pvsg\" (UniqueName: \"kubernetes.io/projected/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-kube-api-access-2pvsg\") pod \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\" (UID: \"05416e96-e39f-4dfb-abd6-d74c6fe66eb7\") "
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.816693 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-kube-api-access-2pvsg" (OuterVolumeSpecName: "kube-api-access-2pvsg") pod "05416e96-e39f-4dfb-abd6-d74c6fe66eb7" (UID: "05416e96-e39f-4dfb-abd6-d74c6fe66eb7"). InnerVolumeSpecName "kube-api-access-2pvsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.837325 4817 scope.go:117] "RemoveContainer" containerID="d82ea0da8426f784499c3eeb032081c3c29cfb8534ea5d19f7aeec83515cc006"
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.857064 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-inventory" (OuterVolumeSpecName: "inventory") pod "05416e96-e39f-4dfb-abd6-d74c6fe66eb7" (UID: "05416e96-e39f-4dfb-abd6-d74c6fe66eb7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.859513 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05416e96-e39f-4dfb-abd6-d74c6fe66eb7" (UID: "05416e96-e39f-4dfb-abd6-d74c6fe66eb7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.912069 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pvsg\" (UniqueName: \"kubernetes.io/projected/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-kube-api-access-2pvsg\") on node \"crc\" DevicePath \"\""
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.912097 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:03:42 crc kubenswrapper[4817]: I0314 06:03:42.912108 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05416e96-e39f-4dfb-abd6-d74c6fe66eb7-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.190025 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59" event={"ID":"05416e96-e39f-4dfb-abd6-d74c6fe66eb7","Type":"ContainerDied","Data":"a4ff08b7dec9a11830de2899e3307b718c98a1908c6b542b1faef86c31597849"}
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.190089 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4ff08b7dec9a11830de2899e3307b718c98a1908c6b542b1faef86c31597849"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.190119 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.402560 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"]
Mar 14 06:03:43 crc kubenswrapper[4817]: E0314 06:03:43.403069 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05416e96-e39f-4dfb-abd6-d74c6fe66eb7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.403095 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="05416e96-e39f-4dfb-abd6-d74c6fe66eb7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.403319 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="05416e96-e39f-4dfb-abd6-d74c6fe66eb7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.404210 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.407199 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.407859 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.408326 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.408604 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.434329 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"]
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.549294 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vph\" (UniqueName: \"kubernetes.io/projected/9b985670-a3bb-4996-9742-801968709eb8-kube-api-access-x8vph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.549462 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.549586 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.652193 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.652884 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.653025 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8vph\" (UniqueName: \"kubernetes.io/projected/9b985670-a3bb-4996-9742-801968709eb8-kube-api-access-x8vph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.662568 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.662884 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.675468 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8vph\" (UniqueName: \"kubernetes.io/projected/9b985670-a3bb-4996-9742-801968709eb8-kube-api-access-x8vph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:43 crc kubenswrapper[4817]: I0314 06:03:43.756396 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:03:44 crc kubenswrapper[4817]: I0314 06:03:44.308721 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"]
Mar 14 06:03:45 crc kubenswrapper[4817]: I0314 06:03:45.229420 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb" event={"ID":"9b985670-a3bb-4996-9742-801968709eb8","Type":"ContainerStarted","Data":"55c21c2681875d9e52bb111ec8a48e9d83ff964d7d7e4faa459badb0ce7b27ed"}
Mar 14 06:03:45 crc kubenswrapper[4817]: I0314 06:03:45.236799 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb" event={"ID":"9b985670-a3bb-4996-9742-801968709eb8","Type":"ContainerStarted","Data":"fa85ca1783d2dee739a5d5234b0ae840423f32cb3b5134f2bbc3355e847e9c39"}
Mar 14 06:03:45 crc kubenswrapper[4817]: I0314 06:03:45.266714 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb" podStartSLOduration=1.707219382 podStartE2EDuration="2.266692862s" podCreationTimestamp="2026-03-14 06:03:43 +0000 UTC" firstStartedPulling="2026-03-14 06:03:44.312181016 +0000 UTC m=+1878.350441762" lastFinishedPulling="2026-03-14 06:03:44.871654496 +0000 UTC m=+1878.909915242" observedRunningTime="2026-03-14 06:03:45.260192227 +0000 UTC m=+1879.298452993" watchObservedRunningTime="2026-03-14 06:03:45.266692862 +0000 UTC m=+1879.304953608"
Mar 14 06:03:46 crc kubenswrapper[4817]: I0314 06:03:46.068134 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fbf7q"]
Mar 14 06:03:46 crc kubenswrapper[4817]: I0314 06:03:46.083389 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fbf7q"]
Mar 14 06:03:46 crc kubenswrapper[4817]: I0314 06:03:46.745040 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d5019d-817c-4cc1-b73d-7e32a6cb97b3" path="/var/lib/kubelet/pods/c2d5019d-817c-4cc1-b73d-7e32a6cb97b3/volumes"
Mar 14 06:03:47 crc kubenswrapper[4817]: I0314 06:03:47.042996 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-z7ltl"]
Mar 14 06:03:47 crc kubenswrapper[4817]: I0314 06:03:47.052132 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-z7ltl"]
Mar 14 06:03:48 crc kubenswrapper[4817]: I0314 06:03:48.750649 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e16259-a87f-4bb8-8fa1-5ee63129e195" path="/var/lib/kubelet/pods/a0e16259-a87f-4bb8-8fa1-5ee63129e195/volumes"
Mar 14 06:03:53 crc kubenswrapper[4817]: I0314 06:03:53.732681 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:03:53 crc kubenswrapper[4817]: E0314 06:03:53.734133 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.155728 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557804-txgbg"]
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.159311 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557804-txgbg"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.162613 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.163017 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.164504 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.170809 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557804-txgbg"]
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.249760 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxc6f\" (UniqueName: \"kubernetes.io/projected/eefc7b79-331d-4b08-977f-1b27e3f414eb-kube-api-access-vxc6f\") pod \"auto-csr-approver-29557804-txgbg\" (UID: \"eefc7b79-331d-4b08-977f-1b27e3f414eb\") " pod="openshift-infra/auto-csr-approver-29557804-txgbg"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.351623 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxc6f\" (UniqueName: \"kubernetes.io/projected/eefc7b79-331d-4b08-977f-1b27e3f414eb-kube-api-access-vxc6f\") pod \"auto-csr-approver-29557804-txgbg\" (UID: \"eefc7b79-331d-4b08-977f-1b27e3f414eb\") " pod="openshift-infra/auto-csr-approver-29557804-txgbg"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.375180 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxc6f\" (UniqueName: \"kubernetes.io/projected/eefc7b79-331d-4b08-977f-1b27e3f414eb-kube-api-access-vxc6f\") pod \"auto-csr-approver-29557804-txgbg\" (UID: \"eefc7b79-331d-4b08-977f-1b27e3f414eb\") " pod="openshift-infra/auto-csr-approver-29557804-txgbg"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.495258 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557804-txgbg"
Mar 14 06:04:00 crc kubenswrapper[4817]: I0314 06:04:00.976732 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557804-txgbg"]
Mar 14 06:04:00 crc kubenswrapper[4817]: W0314 06:04:00.982420 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeefc7b79_331d_4b08_977f_1b27e3f414eb.slice/crio-c0ee24bc6d97cf775629ab699bd6cdd16cb381c45dfc95ab130bc1abb782867b WatchSource:0}: Error finding container c0ee24bc6d97cf775629ab699bd6cdd16cb381c45dfc95ab130bc1abb782867b: Status 404 returned error can't find the container with id c0ee24bc6d97cf775629ab699bd6cdd16cb381c45dfc95ab130bc1abb782867b
Mar 14 06:04:01 crc kubenswrapper[4817]: I0314 06:04:01.441479 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557804-txgbg" event={"ID":"eefc7b79-331d-4b08-977f-1b27e3f414eb","Type":"ContainerStarted","Data":"c0ee24bc6d97cf775629ab699bd6cdd16cb381c45dfc95ab130bc1abb782867b"}
Mar 14 06:04:02 crc kubenswrapper[4817]: I0314 06:04:02.455299 4817 generic.go:334] "Generic (PLEG): container finished" podID="eefc7b79-331d-4b08-977f-1b27e3f414eb" containerID="93b989742fae0f4354f257e4eb8b257dfaff84b4f4ca6273236fdb675145ed85" exitCode=0
Mar 14 06:04:02 crc kubenswrapper[4817]: I0314 06:04:02.455538 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557804-txgbg" event={"ID":"eefc7b79-331d-4b08-977f-1b27e3f414eb","Type":"ContainerDied","Data":"93b989742fae0f4354f257e4eb8b257dfaff84b4f4ca6273236fdb675145ed85"}
Mar 14 06:04:03 crc kubenswrapper[4817]: I0314 06:04:03.814378 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557804-txgbg"
Mar 14 06:04:03 crc kubenswrapper[4817]: I0314 06:04:03.932773 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxc6f\" (UniqueName: \"kubernetes.io/projected/eefc7b79-331d-4b08-977f-1b27e3f414eb-kube-api-access-vxc6f\") pod \"eefc7b79-331d-4b08-977f-1b27e3f414eb\" (UID: \"eefc7b79-331d-4b08-977f-1b27e3f414eb\") "
Mar 14 06:04:03 crc kubenswrapper[4817]: I0314 06:04:03.938214 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefc7b79-331d-4b08-977f-1b27e3f414eb-kube-api-access-vxc6f" (OuterVolumeSpecName: "kube-api-access-vxc6f") pod "eefc7b79-331d-4b08-977f-1b27e3f414eb" (UID: "eefc7b79-331d-4b08-977f-1b27e3f414eb"). InnerVolumeSpecName "kube-api-access-vxc6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:04:04 crc kubenswrapper[4817]: I0314 06:04:04.035231 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxc6f\" (UniqueName: \"kubernetes.io/projected/eefc7b79-331d-4b08-977f-1b27e3f414eb-kube-api-access-vxc6f\") on node \"crc\" DevicePath \"\""
Mar 14 06:04:04 crc kubenswrapper[4817]: I0314 06:04:04.479692 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557804-txgbg" event={"ID":"eefc7b79-331d-4b08-977f-1b27e3f414eb","Type":"ContainerDied","Data":"c0ee24bc6d97cf775629ab699bd6cdd16cb381c45dfc95ab130bc1abb782867b"}
Mar 14 06:04:04 crc kubenswrapper[4817]: I0314 06:04:04.479738 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ee24bc6d97cf775629ab699bd6cdd16cb381c45dfc95ab130bc1abb782867b"
Mar 14 06:04:04 crc kubenswrapper[4817]: I0314 06:04:04.479803 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557804-txgbg"
Mar 14 06:04:04 crc kubenswrapper[4817]: I0314 06:04:04.882203 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557798-f5p6d"]
Mar 14 06:04:04 crc kubenswrapper[4817]: I0314 06:04:04.893336 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557798-f5p6d"]
Mar 14 06:04:06 crc kubenswrapper[4817]: I0314 06:04:06.747193 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:04:06 crc kubenswrapper[4817]: E0314 06:04:06.748283 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:04:06 crc kubenswrapper[4817]: I0314 06:04:06.751955 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cba77f52-4903-4483-a9f1-86e1a42cd513" path="/var/lib/kubelet/pods/cba77f52-4903-4483-a9f1-86e1a42cd513/volumes"
Mar 14 06:04:18 crc kubenswrapper[4817]: I0314 06:04:18.732078 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:04:18 crc kubenswrapper[4817]: E0314 06:04:18.733311 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:04:30 crc kubenswrapper[4817]: I0314 06:04:30.732564 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:04:30 crc kubenswrapper[4817]: E0314 06:04:30.734132 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:04:36 crc kubenswrapper[4817]: I0314 06:04:36.048101 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mmkrb"]
Mar 14 06:04:36 crc kubenswrapper[4817]: I0314 06:04:36.061530 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mmkrb"]
Mar 14 06:04:36 crc kubenswrapper[4817]: I0314 06:04:36.741682 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d12d202-daac-4eb2-a09a-5c3a63251b85" path="/var/lib/kubelet/pods/3d12d202-daac-4eb2-a09a-5c3a63251b85/volumes"
Mar 14 06:04:36 crc kubenswrapper[4817]: I0314 06:04:36.842779 4817 generic.go:334] "Generic (PLEG): container finished" podID="9b985670-a3bb-4996-9742-801968709eb8" containerID="55c21c2681875d9e52bb111ec8a48e9d83ff964d7d7e4faa459badb0ce7b27ed" exitCode=0
Mar 14 06:04:36 crc kubenswrapper[4817]: I0314 06:04:36.842832 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb" event={"ID":"9b985670-a3bb-4996-9742-801968709eb8","Type":"ContainerDied","Data":"55c21c2681875d9e52bb111ec8a48e9d83ff964d7d7e4faa459badb0ce7b27ed"}
Mar 14 06:04:37 crc kubenswrapper[4817]: I0314 06:04:37.036235 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1171-account-create-update-fgfss"]
Mar 14 06:04:37 crc kubenswrapper[4817]: I0314 06:04:37.046431 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1e84-account-create-update-nr95f"]
Mar 14 06:04:37 crc kubenswrapper[4817]: I0314 06:04:37.057533 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-c4bd4"]
Mar 14 06:04:37 crc kubenswrapper[4817]: I0314 06:04:37.066395 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mtn8w"]
Mar 14 06:04:37 crc kubenswrapper[4817]: I0314 06:04:37.073284 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1e84-account-create-update-nr95f"]
Mar 14 06:04:37 crc kubenswrapper[4817]: I0314 06:04:37.079526 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1171-account-create-update-fgfss"]
Mar 14 06:04:37 crc kubenswrapper[4817]: I0314 06:04:37.086413 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mtn8w"]
Mar 14 06:04:37 crc kubenswrapper[4817]: I0314 06:04:37.092619 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-c4bd4"]
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.051227 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-efad-account-create-update-mccw2"]
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.061549 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-efad-account-create-update-mccw2"]
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.278035 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.477656 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8vph\" (UniqueName: \"kubernetes.io/projected/9b985670-a3bb-4996-9742-801968709eb8-kube-api-access-x8vph\") pod \"9b985670-a3bb-4996-9742-801968709eb8\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") "
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.477736 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-inventory\") pod \"9b985670-a3bb-4996-9742-801968709eb8\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") "
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.477798 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-ssh-key-openstack-edpm-ipam\") pod \"9b985670-a3bb-4996-9742-801968709eb8\" (UID: \"9b985670-a3bb-4996-9742-801968709eb8\") "
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.486820 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b985670-a3bb-4996-9742-801968709eb8-kube-api-access-x8vph" (OuterVolumeSpecName: "kube-api-access-x8vph") pod "9b985670-a3bb-4996-9742-801968709eb8" (UID: "9b985670-a3bb-4996-9742-801968709eb8"). InnerVolumeSpecName "kube-api-access-x8vph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.530569 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b985670-a3bb-4996-9742-801968709eb8" (UID: "9b985670-a3bb-4996-9742-801968709eb8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.532742 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-inventory" (OuterVolumeSpecName: "inventory") pod "9b985670-a3bb-4996-9742-801968709eb8" (UID: "9b985670-a3bb-4996-9742-801968709eb8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.582163 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.582210 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8vph\" (UniqueName: \"kubernetes.io/projected/9b985670-a3bb-4996-9742-801968709eb8-kube-api-access-x8vph\") on node \"crc\" DevicePath \"\""
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.582224 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b985670-a3bb-4996-9742-801968709eb8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.745173 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="689fbc67-9915-470c-a459-7f1787c26534" path="/var/lib/kubelet/pods/689fbc67-9915-470c-a459-7f1787c26534/volumes"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.746015 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a140f8f-9be0-40ee-a952-2b2ef0d67031" path="/var/lib/kubelet/pods/7a140f8f-9be0-40ee-a952-2b2ef0d67031/volumes"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.746811 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b608e883-8038-424c-aa1a-d5a7b23ba0bf" path="/var/lib/kubelet/pods/b608e883-8038-424c-aa1a-d5a7b23ba0bf/volumes"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.747609 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc6cb376-b4a6-449c-9155-4da3eaa5a94b" path="/var/lib/kubelet/pods/fc6cb376-b4a6-449c-9155-4da3eaa5a94b/volumes"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.749209 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf4c48c-91c4-4d85-938e-ec70263d466e" path="/var/lib/kubelet/pods/fdf4c48c-91c4-4d85-938e-ec70263d466e/volumes"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.861882 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb" event={"ID":"9b985670-a3bb-4996-9742-801968709eb8","Type":"ContainerDied","Data":"fa85ca1783d2dee739a5d5234b0ae840423f32cb3b5134f2bbc3355e847e9c39"}
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.861936 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa85ca1783d2dee739a5d5234b0ae840423f32cb3b5134f2bbc3355e847e9c39"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.862021 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.949391 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dkgx2"]
Mar 14 06:04:38 crc kubenswrapper[4817]: E0314 06:04:38.949834 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b985670-a3bb-4996-9742-801968709eb8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.949853 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b985670-a3bb-4996-9742-801968709eb8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:04:38 crc kubenswrapper[4817]: E0314 06:04:38.949883 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefc7b79-331d-4b08-977f-1b27e3f414eb" containerName="oc"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.949889 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefc7b79-331d-4b08-977f-1b27e3f414eb" containerName="oc"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.951440 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b985670-a3bb-4996-9742-801968709eb8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.951465 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefc7b79-331d-4b08-977f-1b27e3f414eb" containerName="oc"
Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.952145 4817 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.954804 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.955150 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.955419 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.957904 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:04:38 crc kubenswrapper[4817]: I0314 06:04:38.971739 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dkgx2"] Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.090715 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.090821 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.090904 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r7x5x\" (UniqueName: \"kubernetes.io/projected/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-kube-api-access-r7x5x\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.192710 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.192823 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.192911 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7x5x\" (UniqueName: \"kubernetes.io/projected/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-kube-api-access-r7x5x\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.208686 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.208755 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.212872 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7x5x\" (UniqueName: \"kubernetes.io/projected/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-kube-api-access-r7x5x\") pod \"ssh-known-hosts-edpm-deployment-dkgx2\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.275073 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.841156 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dkgx2"] Mar 14 06:04:39 crc kubenswrapper[4817]: I0314 06:04:39.872005 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" event={"ID":"0fc1865c-4df2-4c2d-a470-a2c8d02dec60","Type":"ContainerStarted","Data":"f1a504778a77e798201277188399af91506c7f5ad1f77ff08760f78902e2b414"} Mar 14 06:04:40 crc kubenswrapper[4817]: I0314 06:04:40.885323 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" event={"ID":"0fc1865c-4df2-4c2d-a470-a2c8d02dec60","Type":"ContainerStarted","Data":"42e37c64fab633d0842648d0a8d35d9d86004550e1c2e89a77ed1fa2b731dba6"} Mar 14 06:04:40 crc kubenswrapper[4817]: I0314 06:04:40.906517 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" 
podStartSLOduration=2.474663532 podStartE2EDuration="2.906486637s" podCreationTimestamp="2026-03-14 06:04:38 +0000 UTC" firstStartedPulling="2026-03-14 06:04:39.848258191 +0000 UTC m=+1933.886518937" lastFinishedPulling="2026-03-14 06:04:40.280081286 +0000 UTC m=+1934.318342042" observedRunningTime="2026-03-14 06:04:40.90519175 +0000 UTC m=+1934.943452546" watchObservedRunningTime="2026-03-14 06:04:40.906486637 +0000 UTC m=+1934.944747423" Mar 14 06:04:42 crc kubenswrapper[4817]: I0314 06:04:42.731974 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:04:42 crc kubenswrapper[4817]: E0314 06:04:42.732362 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:04:42 crc kubenswrapper[4817]: I0314 06:04:42.958680 4817 scope.go:117] "RemoveContainer" containerID="e2230960fc959ff8b0ba1e5cdf2736f3d66792ea0a205510deda2bd1dfc13e8a" Mar 14 06:04:43 crc kubenswrapper[4817]: I0314 06:04:43.013565 4817 scope.go:117] "RemoveContainer" containerID="3c059d530572839117be6544d40119b5f20665db829591c8224714c628577580" Mar 14 06:04:43 crc kubenswrapper[4817]: I0314 06:04:43.033947 4817 scope.go:117] "RemoveContainer" containerID="455c57098b48d646a39bc30ff18c513d9a2aae6123587538e8e8ddf5640ef9ba" Mar 14 06:04:43 crc kubenswrapper[4817]: I0314 06:04:43.103491 4817 scope.go:117] "RemoveContainer" containerID="e7190b27429feb1422fb9099e30e1e455c9f1050911818f5548bc155ed7ea5e1" Mar 14 06:04:43 crc kubenswrapper[4817]: I0314 06:04:43.130031 4817 scope.go:117] "RemoveContainer" 
containerID="bbe42d57d935818f316d61b98ac093be210893b0c901fafc039d564e1e065f6d" Mar 14 06:04:43 crc kubenswrapper[4817]: I0314 06:04:43.197271 4817 scope.go:117] "RemoveContainer" containerID="d5ddaa27474aba842f32ff6cb098dcd2eef54bd56a290f34ea7c4736cc8d786e" Mar 14 06:04:43 crc kubenswrapper[4817]: I0314 06:04:43.219883 4817 scope.go:117] "RemoveContainer" containerID="bc80be5572b8024d32599245106517f8839b626aba33d9745d3e3d0fe0f9030e" Mar 14 06:04:43 crc kubenswrapper[4817]: I0314 06:04:43.245750 4817 scope.go:117] "RemoveContainer" containerID="ebcf52beebdc4d6f7fa5d7647ece670753dd5613bb28788ff24cbfa0d42cedfb" Mar 14 06:04:43 crc kubenswrapper[4817]: I0314 06:04:43.267304 4817 scope.go:117] "RemoveContainer" containerID="d509e85524eb80838eb94f1c5cdc7dd9266adda5cda277f61528a27d934195d5" Mar 14 06:04:47 crc kubenswrapper[4817]: I0314 06:04:47.971091 4817 generic.go:334] "Generic (PLEG): container finished" podID="0fc1865c-4df2-4c2d-a470-a2c8d02dec60" containerID="42e37c64fab633d0842648d0a8d35d9d86004550e1c2e89a77ed1fa2b731dba6" exitCode=0 Mar 14 06:04:47 crc kubenswrapper[4817]: I0314 06:04:47.972012 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" event={"ID":"0fc1865c-4df2-4c2d-a470-a2c8d02dec60","Type":"ContainerDied","Data":"42e37c64fab633d0842648d0a8d35d9d86004550e1c2e89a77ed1fa2b731dba6"} Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.377516 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.518778 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-inventory-0\") pod \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.518975 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-ssh-key-openstack-edpm-ipam\") pod \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.520001 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7x5x\" (UniqueName: \"kubernetes.io/projected/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-kube-api-access-r7x5x\") pod \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\" (UID: \"0fc1865c-4df2-4c2d-a470-a2c8d02dec60\") " Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.528138 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-kube-api-access-r7x5x" (OuterVolumeSpecName: "kube-api-access-r7x5x") pod "0fc1865c-4df2-4c2d-a470-a2c8d02dec60" (UID: "0fc1865c-4df2-4c2d-a470-a2c8d02dec60"). InnerVolumeSpecName "kube-api-access-r7x5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.553047 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0fc1865c-4df2-4c2d-a470-a2c8d02dec60" (UID: "0fc1865c-4df2-4c2d-a470-a2c8d02dec60"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.560161 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0fc1865c-4df2-4c2d-a470-a2c8d02dec60" (UID: "0fc1865c-4df2-4c2d-a470-a2c8d02dec60"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.622726 4817 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.622776 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.622793 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7x5x\" (UniqueName: \"kubernetes.io/projected/0fc1865c-4df2-4c2d-a470-a2c8d02dec60-kube-api-access-r7x5x\") on node \"crc\" DevicePath \"\"" Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.996219 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" event={"ID":"0fc1865c-4df2-4c2d-a470-a2c8d02dec60","Type":"ContainerDied","Data":"f1a504778a77e798201277188399af91506c7f5ad1f77ff08760f78902e2b414"} Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.996284 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a504778a77e798201277188399af91506c7f5ad1f77ff08760f78902e2b414" Mar 14 06:04:49 crc kubenswrapper[4817]: I0314 06:04:49.996338 
4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dkgx2" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.104499 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7"] Mar 14 06:04:50 crc kubenswrapper[4817]: E0314 06:04:50.104962 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc1865c-4df2-4c2d-a470-a2c8d02dec60" containerName="ssh-known-hosts-edpm-deployment" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.104980 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc1865c-4df2-4c2d-a470-a2c8d02dec60" containerName="ssh-known-hosts-edpm-deployment" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.105190 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc1865c-4df2-4c2d-a470-a2c8d02dec60" containerName="ssh-known-hosts-edpm-deployment" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.109309 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.114462 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.115337 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.115448 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.115644 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.131083 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7"] Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.239678 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr88h\" (UniqueName: \"kubernetes.io/projected/9960c053-42b4-4c33-abfb-ec56901d0f02-kube-api-access-sr88h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.240205 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.240291 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.342115 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr88h\" (UniqueName: \"kubernetes.io/projected/9960c053-42b4-4c33-abfb-ec56901d0f02-kube-api-access-sr88h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.342243 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.342316 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.350342 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.354527 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.360655 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr88h\" (UniqueName: \"kubernetes.io/projected/9960c053-42b4-4c33-abfb-ec56901d0f02-kube-api-access-sr88h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ncsx7\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.435647 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:04:50 crc kubenswrapper[4817]: I0314 06:04:50.978372 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7"] Mar 14 06:04:51 crc kubenswrapper[4817]: I0314 06:04:51.005838 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" event={"ID":"9960c053-42b4-4c33-abfb-ec56901d0f02","Type":"ContainerStarted","Data":"8862d41ef206cecc5240b8b4702f4305f82bfb3cffacf24ef9aa8857826df6a0"} Mar 14 06:04:52 crc kubenswrapper[4817]: I0314 06:04:52.024532 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" event={"ID":"9960c053-42b4-4c33-abfb-ec56901d0f02","Type":"ContainerStarted","Data":"a2564c2893a8026d3bc3312d9f73cd2b91bb3f013b15cc31cbfa8bbd17fb18f5"} Mar 14 06:04:52 crc kubenswrapper[4817]: I0314 06:04:52.064276 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" podStartSLOduration=1.610936216 podStartE2EDuration="2.064253476s" podCreationTimestamp="2026-03-14 06:04:50 +0000 UTC" firstStartedPulling="2026-03-14 06:04:50.986105511 +0000 UTC m=+1945.024366257" lastFinishedPulling="2026-03-14 06:04:51.439422771 +0000 UTC m=+1945.477683517" observedRunningTime="2026-03-14 06:04:52.052584523 +0000 UTC m=+1946.090845299" watchObservedRunningTime="2026-03-14 06:04:52.064253476 +0000 UTC m=+1946.102514242" Mar 14 06:04:56 crc kubenswrapper[4817]: I0314 06:04:56.741339 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2" Mar 14 06:04:56 crc kubenswrapper[4817]: E0314 06:04:56.742233 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:04:59 crc kubenswrapper[4817]: I0314 06:04:59.088479 4817 generic.go:334] "Generic (PLEG): container finished" podID="9960c053-42b4-4c33-abfb-ec56901d0f02" containerID="a2564c2893a8026d3bc3312d9f73cd2b91bb3f013b15cc31cbfa8bbd17fb18f5" exitCode=0 Mar 14 06:04:59 crc kubenswrapper[4817]: I0314 06:04:59.088574 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" event={"ID":"9960c053-42b4-4c33-abfb-ec56901d0f02","Type":"ContainerDied","Data":"a2564c2893a8026d3bc3312d9f73cd2b91bb3f013b15cc31cbfa8bbd17fb18f5"} Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.532610 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.663030 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-inventory\") pod \"9960c053-42b4-4c33-abfb-ec56901d0f02\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.663192 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr88h\" (UniqueName: \"kubernetes.io/projected/9960c053-42b4-4c33-abfb-ec56901d0f02-kube-api-access-sr88h\") pod \"9960c053-42b4-4c33-abfb-ec56901d0f02\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.663849 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-ssh-key-openstack-edpm-ipam\") pod \"9960c053-42b4-4c33-abfb-ec56901d0f02\" (UID: \"9960c053-42b4-4c33-abfb-ec56901d0f02\") " Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.674196 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9960c053-42b4-4c33-abfb-ec56901d0f02-kube-api-access-sr88h" (OuterVolumeSpecName: "kube-api-access-sr88h") pod "9960c053-42b4-4c33-abfb-ec56901d0f02" (UID: "9960c053-42b4-4c33-abfb-ec56901d0f02"). InnerVolumeSpecName "kube-api-access-sr88h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.697776 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-inventory" (OuterVolumeSpecName: "inventory") pod "9960c053-42b4-4c33-abfb-ec56901d0f02" (UID: "9960c053-42b4-4c33-abfb-ec56901d0f02"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.700641 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9960c053-42b4-4c33-abfb-ec56901d0f02" (UID: "9960c053-42b4-4c33-abfb-ec56901d0f02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.765944 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr88h\" (UniqueName: \"kubernetes.io/projected/9960c053-42b4-4c33-abfb-ec56901d0f02-kube-api-access-sr88h\") on node \"crc\" DevicePath \"\"" Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.766141 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:05:00 crc kubenswrapper[4817]: I0314 06:05:00.766214 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9960c053-42b4-4c33-abfb-ec56901d0f02-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.108811 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7" event={"ID":"9960c053-42b4-4c33-abfb-ec56901d0f02","Type":"ContainerDied","Data":"8862d41ef206cecc5240b8b4702f4305f82bfb3cffacf24ef9aa8857826df6a0"} Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.109228 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8862d41ef206cecc5240b8b4702f4305f82bfb3cffacf24ef9aa8857826df6a0" Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.109093 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.198720 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"]
Mar 14 06:05:01 crc kubenswrapper[4817]: E0314 06:05:01.199220 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9960c053-42b4-4c33-abfb-ec56901d0f02" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.199245 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9960c053-42b4-4c33-abfb-ec56901d0f02" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.199490 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="9960c053-42b4-4c33-abfb-ec56901d0f02" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.200219 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.202298 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.204427 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.204493 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.204797 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.216366 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"]
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.378601 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.378663 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6j8\" (UniqueName: \"kubernetes.io/projected/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-kube-api-access-pp6j8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.378769 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.480560 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.480657 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6j8\" (UniqueName: \"kubernetes.io/projected/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-kube-api-access-pp6j8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.480735 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.488262 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.489946 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.503798 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6j8\" (UniqueName: \"kubernetes.io/projected/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-kube-api-access-pp6j8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:01 crc kubenswrapper[4817]: I0314 06:05:01.518742 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:02 crc kubenswrapper[4817]: I0314 06:05:02.105374 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"]
Mar 14 06:05:02 crc kubenswrapper[4817]: I0314 06:05:02.117354 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr" event={"ID":"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53","Type":"ContainerStarted","Data":"a1fe47eccc8eb5e6794684cd1981ca51a32b6a528104dd1fe2aea872c4823bc7"}
Mar 14 06:05:04 crc kubenswrapper[4817]: I0314 06:05:04.136843 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr" event={"ID":"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53","Type":"ContainerStarted","Data":"cb4d04b7c8af655ebcade33ba926ed572a30db2ebe145cd268faec4f1b2be5b9"}
Mar 14 06:05:04 crc kubenswrapper[4817]: I0314 06:05:04.165889 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr" podStartSLOduration=2.42744059 podStartE2EDuration="3.165866168s" podCreationTimestamp="2026-03-14 06:05:01 +0000 UTC" firstStartedPulling="2026-03-14 06:05:02.098462865 +0000 UTC m=+1956.136723611" lastFinishedPulling="2026-03-14 06:05:02.836888443 +0000 UTC m=+1956.875149189" observedRunningTime="2026-03-14 06:05:04.152525247 +0000 UTC m=+1958.190786023" watchObservedRunningTime="2026-03-14 06:05:04.165866168 +0000 UTC m=+1958.204126924"
Mar 14 06:05:07 crc kubenswrapper[4817]: I0314 06:05:07.732584 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:05:07 crc kubenswrapper[4817]: E0314 06:05:07.733276 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:05:13 crc kubenswrapper[4817]: I0314 06:05:13.230962 4817 generic.go:334] "Generic (PLEG): container finished" podID="2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53" containerID="cb4d04b7c8af655ebcade33ba926ed572a30db2ebe145cd268faec4f1b2be5b9" exitCode=0
Mar 14 06:05:13 crc kubenswrapper[4817]: I0314 06:05:13.231071 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr" event={"ID":"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53","Type":"ContainerDied","Data":"cb4d04b7c8af655ebcade33ba926ed572a30db2ebe145cd268faec4f1b2be5b9"}
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.760728 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.869596 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp6j8\" (UniqueName: \"kubernetes.io/projected/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-kube-api-access-pp6j8\") pod \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") "
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.870226 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-inventory\") pod \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") "
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.870313 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-ssh-key-openstack-edpm-ipam\") pod \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\" (UID: \"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53\") "
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.879811 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-kube-api-access-pp6j8" (OuterVolumeSpecName: "kube-api-access-pp6j8") pod "2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53" (UID: "2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53"). InnerVolumeSpecName "kube-api-access-pp6j8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.918672 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-inventory" (OuterVolumeSpecName: "inventory") pod "2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53" (UID: "2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.924030 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53" (UID: "2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.974443 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp6j8\" (UniqueName: \"kubernetes.io/projected/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-kube-api-access-pp6j8\") on node \"crc\" DevicePath \"\""
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.974538 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:05:14 crc kubenswrapper[4817]: I0314 06:05:14.974563 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:05:15 crc kubenswrapper[4817]: I0314 06:05:15.252845 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr" event={"ID":"2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53","Type":"ContainerDied","Data":"a1fe47eccc8eb5e6794684cd1981ca51a32b6a528104dd1fe2aea872c4823bc7"}
Mar 14 06:05:15 crc kubenswrapper[4817]: I0314 06:05:15.252933 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1fe47eccc8eb5e6794684cd1981ca51a32b6a528104dd1fe2aea872c4823bc7"
Mar 14 06:05:15 crc kubenswrapper[4817]: I0314 06:05:15.252948 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"
Mar 14 06:05:18 crc kubenswrapper[4817]: I0314 06:05:18.732338 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:05:18 crc kubenswrapper[4817]: E0314 06:05:18.733430 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:05:29 crc kubenswrapper[4817]: I0314 06:05:29.732770 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:05:29 crc kubenswrapper[4817]: E0314 06:05:29.734456 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:05:43 crc kubenswrapper[4817]: I0314 06:05:43.731784 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:05:44 crc kubenswrapper[4817]: I0314 06:05:44.561606 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"3dab7efbef397f9b73d3e4fc38d0c4c45ca0a8ca618353f78100fd0231d68820"}
Mar 14 06:05:46 crc kubenswrapper[4817]: I0314 06:05:46.045835 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hkrfb"]
Mar 14 06:05:46 crc kubenswrapper[4817]: I0314 06:05:46.057350 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hkrfb"]
Mar 14 06:05:46 crc kubenswrapper[4817]: I0314 06:05:46.751386 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db362f9-2d12-48c2-b94c-e1406a811e1e" path="/var/lib/kubelet/pods/0db362f9-2d12-48c2-b94c-e1406a811e1e/volumes"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.547803 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xsclx"]
Mar 14 06:05:57 crc kubenswrapper[4817]: E0314 06:05:57.549313 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.549335 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.549563 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.551433 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.566266 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsclx"]
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.647167 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g22qw\" (UniqueName: \"kubernetes.io/projected/78755665-f7cc-4b7c-830d-153fc21a70df-kube-api-access-g22qw\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.647572 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-utilities\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.647653 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-catalog-content\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.749856 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g22qw\" (UniqueName: \"kubernetes.io/projected/78755665-f7cc-4b7c-830d-153fc21a70df-kube-api-access-g22qw\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.750011 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-utilities\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.750094 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-catalog-content\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.750568 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-catalog-content\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.750602 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-utilities\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.774015 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g22qw\" (UniqueName: \"kubernetes.io/projected/78755665-f7cc-4b7c-830d-153fc21a70df-kube-api-access-g22qw\") pod \"redhat-operators-xsclx\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") " pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:57 crc kubenswrapper[4817]: I0314 06:05:57.882047 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:05:58 crc kubenswrapper[4817]: I0314 06:05:58.345827 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xsclx"]
Mar 14 06:05:58 crc kubenswrapper[4817]: I0314 06:05:58.706380 4817 generic.go:334] "Generic (PLEG): container finished" podID="78755665-f7cc-4b7c-830d-153fc21a70df" containerID="94e93f059afb4de14b5438e38f84ed83a1bf51f17c441c41f66846a074a9fcc0" exitCode=0
Mar 14 06:05:58 crc kubenswrapper[4817]: I0314 06:05:58.708699 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsclx" event={"ID":"78755665-f7cc-4b7c-830d-153fc21a70df","Type":"ContainerDied","Data":"94e93f059afb4de14b5438e38f84ed83a1bf51f17c441c41f66846a074a9fcc0"}
Mar 14 06:05:58 crc kubenswrapper[4817]: I0314 06:05:58.708917 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsclx" event={"ID":"78755665-f7cc-4b7c-830d-153fc21a70df","Type":"ContainerStarted","Data":"6acca1318a7f64e5ea24937a2153a30f4bc863fea5a9cae4edef47412377d250"}
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.159084 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557806-fxjzp"]
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.161017 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557806-fxjzp"
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.164154 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.164436 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.164638 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.171040 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557806-fxjzp"]
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.202904 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8gx\" (UniqueName: \"kubernetes.io/projected/16d307f5-0c34-4716-87f8-78cadbe917ca-kube-api-access-qt8gx\") pod \"auto-csr-approver-29557806-fxjzp\" (UID: \"16d307f5-0c34-4716-87f8-78cadbe917ca\") " pod="openshift-infra/auto-csr-approver-29557806-fxjzp"
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.305015 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt8gx\" (UniqueName: \"kubernetes.io/projected/16d307f5-0c34-4716-87f8-78cadbe917ca-kube-api-access-qt8gx\") pod \"auto-csr-approver-29557806-fxjzp\" (UID: \"16d307f5-0c34-4716-87f8-78cadbe917ca\") " pod="openshift-infra/auto-csr-approver-29557806-fxjzp"
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.327293 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt8gx\" (UniqueName: \"kubernetes.io/projected/16d307f5-0c34-4716-87f8-78cadbe917ca-kube-api-access-qt8gx\") pod \"auto-csr-approver-29557806-fxjzp\" (UID: \"16d307f5-0c34-4716-87f8-78cadbe917ca\") " pod="openshift-infra/auto-csr-approver-29557806-fxjzp"
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.488692 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557806-fxjzp"
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.731441 4817 generic.go:334] "Generic (PLEG): container finished" podID="78755665-f7cc-4b7c-830d-153fc21a70df" containerID="d1b991b39c1bef4d7a850686b8c9520222a38a6071fd2dd2e11fa19f0b894eb7" exitCode=0
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.748456 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsclx" event={"ID":"78755665-f7cc-4b7c-830d-153fc21a70df","Type":"ContainerDied","Data":"d1b991b39c1bef4d7a850686b8c9520222a38a6071fd2dd2e11fa19f0b894eb7"}
Mar 14 06:06:00 crc kubenswrapper[4817]: I0314 06:06:00.970382 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557806-fxjzp"]
Mar 14 06:06:01 crc kubenswrapper[4817]: I0314 06:06:01.743514 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557806-fxjzp" event={"ID":"16d307f5-0c34-4716-87f8-78cadbe917ca","Type":"ContainerStarted","Data":"6447275f4e3fee570579d2d3f2dd816aacb5f807762e18e41bc1477d174d6a56"}
Mar 14 06:06:01 crc kubenswrapper[4817]: I0314 06:06:01.747808 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsclx" event={"ID":"78755665-f7cc-4b7c-830d-153fc21a70df","Type":"ContainerStarted","Data":"0d06e17939778cc06d8c7ca6d4120b64cb0d1d5183fd40a650b23d3c4ff1e04a"}
Mar 14 06:06:01 crc kubenswrapper[4817]: I0314 06:06:01.769073 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xsclx" podStartSLOduration=2.248095826 podStartE2EDuration="4.769050584s" podCreationTimestamp="2026-03-14 06:05:57 +0000 UTC" firstStartedPulling="2026-03-14 06:05:58.713138236 +0000 UTC m=+2012.751399002" lastFinishedPulling="2026-03-14 06:06:01.234093004 +0000 UTC m=+2015.272353760" observedRunningTime="2026-03-14 06:06:01.767710376 +0000 UTC m=+2015.805971142" watchObservedRunningTime="2026-03-14 06:06:01.769050584 +0000 UTC m=+2015.807311330"
Mar 14 06:06:02 crc kubenswrapper[4817]: I0314 06:06:02.772862 4817 generic.go:334] "Generic (PLEG): container finished" podID="16d307f5-0c34-4716-87f8-78cadbe917ca" containerID="90eb4f023096139ea1173e33b952181702b6099b4941f8e637a2a526ce772315" exitCode=0
Mar 14 06:06:02 crc kubenswrapper[4817]: I0314 06:06:02.773998 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557806-fxjzp" event={"ID":"16d307f5-0c34-4716-87f8-78cadbe917ca","Type":"ContainerDied","Data":"90eb4f023096139ea1173e33b952181702b6099b4941f8e637a2a526ce772315"}
Mar 14 06:06:04 crc kubenswrapper[4817]: I0314 06:06:04.142678 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557806-fxjzp"
Mar 14 06:06:04 crc kubenswrapper[4817]: I0314 06:06:04.192054 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt8gx\" (UniqueName: \"kubernetes.io/projected/16d307f5-0c34-4716-87f8-78cadbe917ca-kube-api-access-qt8gx\") pod \"16d307f5-0c34-4716-87f8-78cadbe917ca\" (UID: \"16d307f5-0c34-4716-87f8-78cadbe917ca\") "
Mar 14 06:06:04 crc kubenswrapper[4817]: I0314 06:06:04.214188 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d307f5-0c34-4716-87f8-78cadbe917ca-kube-api-access-qt8gx" (OuterVolumeSpecName: "kube-api-access-qt8gx") pod "16d307f5-0c34-4716-87f8-78cadbe917ca" (UID: "16d307f5-0c34-4716-87f8-78cadbe917ca"). InnerVolumeSpecName "kube-api-access-qt8gx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:06:04 crc kubenswrapper[4817]: I0314 06:06:04.294648 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt8gx\" (UniqueName: \"kubernetes.io/projected/16d307f5-0c34-4716-87f8-78cadbe917ca-kube-api-access-qt8gx\") on node \"crc\" DevicePath \"\""
Mar 14 06:06:04 crc kubenswrapper[4817]: I0314 06:06:04.791862 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557806-fxjzp" event={"ID":"16d307f5-0c34-4716-87f8-78cadbe917ca","Type":"ContainerDied","Data":"6447275f4e3fee570579d2d3f2dd816aacb5f807762e18e41bc1477d174d6a56"}
Mar 14 06:06:04 crc kubenswrapper[4817]: I0314 06:06:04.792306 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6447275f4e3fee570579d2d3f2dd816aacb5f807762e18e41bc1477d174d6a56"
Mar 14 06:06:04 crc kubenswrapper[4817]: I0314 06:06:04.792161 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557806-fxjzp"
Mar 14 06:06:05 crc kubenswrapper[4817]: I0314 06:06:05.223671 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557800-ztdlq"]
Mar 14 06:06:05 crc kubenswrapper[4817]: I0314 06:06:05.231636 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557800-ztdlq"]
Mar 14 06:06:06 crc kubenswrapper[4817]: I0314 06:06:06.744568 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec34242c-dc4b-4feb-ae3b-16d3b01c48ad" path="/var/lib/kubelet/pods/ec34242c-dc4b-4feb-ae3b-16d3b01c48ad/volumes"
Mar 14 06:06:07 crc kubenswrapper[4817]: I0314 06:06:07.032590 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7kps9"]
Mar 14 06:06:07 crc kubenswrapper[4817]: I0314 06:06:07.041678 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7kps9"]
Mar 14 06:06:07 crc kubenswrapper[4817]: I0314 06:06:07.882195 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:06:07 crc kubenswrapper[4817]: I0314 06:06:07.882574 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:06:07 crc kubenswrapper[4817]: I0314 06:06:07.943641 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:06:08 crc kubenswrapper[4817]: I0314 06:06:08.745827 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="723c4456-1e70-4425-b722-f3c68ae344b4" path="/var/lib/kubelet/pods/723c4456-1e70-4425-b722-f3c68ae344b4/volumes"
Mar 14 06:06:08 crc kubenswrapper[4817]: I0314 06:06:08.885158 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:06:08 crc kubenswrapper[4817]: I0314 06:06:08.942550 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsclx"]
Mar 14 06:06:10 crc kubenswrapper[4817]: I0314 06:06:10.853432 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xsclx" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" containerName="registry-server" containerID="cri-o://0d06e17939778cc06d8c7ca6d4120b64cb0d1d5183fd40a650b23d3c4ff1e04a" gracePeriod=2
Mar 14 06:06:13 crc kubenswrapper[4817]: I0314 06:06:13.047190 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j87wn"]
Mar 14 06:06:13 crc kubenswrapper[4817]: I0314 06:06:13.057501 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j87wn"]
Mar 14 06:06:13 crc kubenswrapper[4817]: I0314 06:06:13.888438 4817 generic.go:334] "Generic (PLEG): container finished" podID="78755665-f7cc-4b7c-830d-153fc21a70df" containerID="0d06e17939778cc06d8c7ca6d4120b64cb0d1d5183fd40a650b23d3c4ff1e04a" exitCode=0
Mar 14 06:06:13 crc kubenswrapper[4817]: I0314 06:06:13.888491 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsclx" event={"ID":"78755665-f7cc-4b7c-830d-153fc21a70df","Type":"ContainerDied","Data":"0d06e17939778cc06d8c7ca6d4120b64cb0d1d5183fd40a650b23d3c4ff1e04a"}
Mar 14 06:06:14 crc kubenswrapper[4817]: I0314 06:06:14.746116 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b5af39-4042-4228-b1c4-5611d88b7256" path="/var/lib/kubelet/pods/42b5af39-4042-4228-b1c4-5611d88b7256/volumes"
Mar 14 06:06:14 crc kubenswrapper[4817]: I0314 06:06:14.902830 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:06:14 crc kubenswrapper[4817]: I0314 06:06:14.903949 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xsclx" event={"ID":"78755665-f7cc-4b7c-830d-153fc21a70df","Type":"ContainerDied","Data":"6acca1318a7f64e5ea24937a2153a30f4bc863fea5a9cae4edef47412377d250"}
Mar 14 06:06:14 crc kubenswrapper[4817]: I0314 06:06:14.904016 4817 scope.go:117] "RemoveContainer" containerID="0d06e17939778cc06d8c7ca6d4120b64cb0d1d5183fd40a650b23d3c4ff1e04a"
Mar 14 06:06:14 crc kubenswrapper[4817]: I0314 06:06:14.938684 4817 scope.go:117] "RemoveContainer" containerID="d1b991b39c1bef4d7a850686b8c9520222a38a6071fd2dd2e11fa19f0b894eb7"
Mar 14 06:06:14 crc kubenswrapper[4817]: I0314 06:06:14.987832 4817 scope.go:117] "RemoveContainer" containerID="94e93f059afb4de14b5438e38f84ed83a1bf51f17c441c41f66846a074a9fcc0"
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.064452 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-utilities\") pod \"78755665-f7cc-4b7c-830d-153fc21a70df\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") "
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.065112 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g22qw\" (UniqueName: \"kubernetes.io/projected/78755665-f7cc-4b7c-830d-153fc21a70df-kube-api-access-g22qw\") pod \"78755665-f7cc-4b7c-830d-153fc21a70df\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") "
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.065257 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-catalog-content\") pod \"78755665-f7cc-4b7c-830d-153fc21a70df\" (UID: \"78755665-f7cc-4b7c-830d-153fc21a70df\") "
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.066041 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-utilities" (OuterVolumeSpecName: "utilities") pod "78755665-f7cc-4b7c-830d-153fc21a70df" (UID: "78755665-f7cc-4b7c-830d-153fc21a70df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.073359 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78755665-f7cc-4b7c-830d-153fc21a70df-kube-api-access-g22qw" (OuterVolumeSpecName: "kube-api-access-g22qw") pod "78755665-f7cc-4b7c-830d-153fc21a70df" (UID: "78755665-f7cc-4b7c-830d-153fc21a70df"). InnerVolumeSpecName "kube-api-access-g22qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.168077 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.168121 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g22qw\" (UniqueName: \"kubernetes.io/projected/78755665-f7cc-4b7c-830d-153fc21a70df-kube-api-access-g22qw\") on node \"crc\" DevicePath \"\""
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.200467 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78755665-f7cc-4b7c-830d-153fc21a70df" (UID: "78755665-f7cc-4b7c-830d-153fc21a70df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.270310 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78755665-f7cc-4b7c-830d-153fc21a70df-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.912938 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xsclx"
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.952358 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xsclx"]
Mar 14 06:06:15 crc kubenswrapper[4817]: I0314 06:06:15.960964 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xsclx"]
Mar 14 06:06:16 crc kubenswrapper[4817]: I0314 06:06:16.750390 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" path="/var/lib/kubelet/pods/78755665-f7cc-4b7c-830d-153fc21a70df/volumes"
Mar 14 06:06:43 crc kubenswrapper[4817]: I0314 06:06:43.437355 4817 scope.go:117] "RemoveContainer" containerID="9546a08e19439822eab3177f338af1d2658d21f4ce07a9beac5f8e6e796ca770"
Mar 14 06:06:43 crc kubenswrapper[4817]: I0314 06:06:43.487120 4817 scope.go:117] "RemoveContainer" containerID="fdb7ea26a0e3821e55461b3c6f4522bc9d645736b5270ae0cdd241a7fc0a6112"
Mar 14 06:06:43 crc kubenswrapper[4817]: I0314 06:06:43.531620 4817 scope.go:117] "RemoveContainer" containerID="e46265dabb2dd2b68b37836133f920a0f45d7bd907ab51570000d4e3342ac7e1"
Mar 14 06:06:43 crc kubenswrapper[4817]: I0314 06:06:43.598909 4817 scope.go:117] "RemoveContainer" containerID="e656645f2ac5d938e5ca3226051b23ba8627cf27dcdb9f4fd66f38d24b174670"
Mar 14 06:06:55 crc kubenswrapper[4817]: I0314 06:06:55.041580 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sz6pp"]
Mar 14 06:06:55 crc kubenswrapper[4817]: I0314 06:06:55.054175 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sz6pp"]
Mar 14 06:06:56 crc kubenswrapper[4817]: I0314 06:06:56.744637 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45463be-ecf1-4c50-a812-29dd2e00dffe" path="/var/lib/kubelet/pods/d45463be-ecf1-4c50-a812-29dd2e00dffe/volumes"
Mar 14 06:07:43 crc
kubenswrapper[4817]: I0314 06:07:43.734561 4817 scope.go:117] "RemoveContainer" containerID="a462159a690487218798a332fdb6a0ebb6a9ce79a08931c5268282f010b8478d" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.160407 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557808-l7twk"] Mar 14 06:08:00 crc kubenswrapper[4817]: E0314 06:08:00.161882 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d307f5-0c34-4716-87f8-78cadbe917ca" containerName="oc" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.162006 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d307f5-0c34-4716-87f8-78cadbe917ca" containerName="oc" Mar 14 06:08:00 crc kubenswrapper[4817]: E0314 06:08:00.162025 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" containerName="registry-server" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.162031 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" containerName="registry-server" Mar 14 06:08:00 crc kubenswrapper[4817]: E0314 06:08:00.162046 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" containerName="extract-content" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.162054 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" containerName="extract-content" Mar 14 06:08:00 crc kubenswrapper[4817]: E0314 06:08:00.162072 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" containerName="extract-utilities" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.162079 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" containerName="extract-utilities" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.162291 4817 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="78755665-f7cc-4b7c-830d-153fc21a70df" containerName="registry-server" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.162304 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d307f5-0c34-4716-87f8-78cadbe917ca" containerName="oc" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.163155 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557808-l7twk" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.167665 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.168036 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.168317 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.176933 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557808-l7twk"] Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.225565 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz4xn\" (UniqueName: \"kubernetes.io/projected/83371b58-2667-4d13-a275-cf93548d7d0f-kube-api-access-bz4xn\") pod \"auto-csr-approver-29557808-l7twk\" (UID: \"83371b58-2667-4d13-a275-cf93548d7d0f\") " pod="openshift-infra/auto-csr-approver-29557808-l7twk" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.328310 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz4xn\" (UniqueName: \"kubernetes.io/projected/83371b58-2667-4d13-a275-cf93548d7d0f-kube-api-access-bz4xn\") pod \"auto-csr-approver-29557808-l7twk\" (UID: \"83371b58-2667-4d13-a275-cf93548d7d0f\") " 
pod="openshift-infra/auto-csr-approver-29557808-l7twk" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.348677 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz4xn\" (UniqueName: \"kubernetes.io/projected/83371b58-2667-4d13-a275-cf93548d7d0f-kube-api-access-bz4xn\") pod \"auto-csr-approver-29557808-l7twk\" (UID: \"83371b58-2667-4d13-a275-cf93548d7d0f\") " pod="openshift-infra/auto-csr-approver-29557808-l7twk" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.505999 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557808-l7twk" Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.979340 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557808-l7twk"] Mar 14 06:08:00 crc kubenswrapper[4817]: I0314 06:08:00.987330 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 06:08:01 crc kubenswrapper[4817]: I0314 06:08:01.117704 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557808-l7twk" event={"ID":"83371b58-2667-4d13-a275-cf93548d7d0f","Type":"ContainerStarted","Data":"63f0071e5573d47a7190b94a5279aef23c225e31f3cf4cb7752f574af352d593"} Mar 14 06:08:03 crc kubenswrapper[4817]: I0314 06:08:03.140306 4817 generic.go:334] "Generic (PLEG): container finished" podID="83371b58-2667-4d13-a275-cf93548d7d0f" containerID="01bcc4033880bcc7c0164eeb69ab72dee709fb945677da0c5ecafadeb44e8bca" exitCode=0 Mar 14 06:08:03 crc kubenswrapper[4817]: I0314 06:08:03.140378 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557808-l7twk" event={"ID":"83371b58-2667-4d13-a275-cf93548d7d0f","Type":"ContainerDied","Data":"01bcc4033880bcc7c0164eeb69ab72dee709fb945677da0c5ecafadeb44e8bca"} Mar 14 06:08:04 crc kubenswrapper[4817]: I0314 06:08:04.714837 4817 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557808-l7twk" Mar 14 06:08:04 crc kubenswrapper[4817]: I0314 06:08:04.825494 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz4xn\" (UniqueName: \"kubernetes.io/projected/83371b58-2667-4d13-a275-cf93548d7d0f-kube-api-access-bz4xn\") pod \"83371b58-2667-4d13-a275-cf93548d7d0f\" (UID: \"83371b58-2667-4d13-a275-cf93548d7d0f\") " Mar 14 06:08:04 crc kubenswrapper[4817]: I0314 06:08:04.835716 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83371b58-2667-4d13-a275-cf93548d7d0f-kube-api-access-bz4xn" (OuterVolumeSpecName: "kube-api-access-bz4xn") pod "83371b58-2667-4d13-a275-cf93548d7d0f" (UID: "83371b58-2667-4d13-a275-cf93548d7d0f"). InnerVolumeSpecName "kube-api-access-bz4xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:08:04 crc kubenswrapper[4817]: I0314 06:08:04.928326 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz4xn\" (UniqueName: \"kubernetes.io/projected/83371b58-2667-4d13-a275-cf93548d7d0f-kube-api-access-bz4xn\") on node \"crc\" DevicePath \"\"" Mar 14 06:08:05 crc kubenswrapper[4817]: I0314 06:08:05.176817 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557808-l7twk" event={"ID":"83371b58-2667-4d13-a275-cf93548d7d0f","Type":"ContainerDied","Data":"63f0071e5573d47a7190b94a5279aef23c225e31f3cf4cb7752f574af352d593"} Mar 14 06:08:05 crc kubenswrapper[4817]: I0314 06:08:05.176870 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f0071e5573d47a7190b94a5279aef23c225e31f3cf4cb7752f574af352d593" Mar 14 06:08:05 crc kubenswrapper[4817]: I0314 06:08:05.176969 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557808-l7twk" Mar 14 06:08:05 crc kubenswrapper[4817]: I0314 06:08:05.789695 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557802-7s5sg"] Mar 14 06:08:05 crc kubenswrapper[4817]: I0314 06:08:05.797341 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557802-7s5sg"] Mar 14 06:08:06 crc kubenswrapper[4817]: I0314 06:08:06.745716 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d97304-d6ea-4fc8-a0f4-d952e4748476" path="/var/lib/kubelet/pods/f9d97304-d6ea-4fc8-a0f4-d952e4748476/volumes" Mar 14 06:08:08 crc kubenswrapper[4817]: I0314 06:08:08.565259 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:08:08 crc kubenswrapper[4817]: I0314 06:08:08.565583 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:08:38 crc kubenswrapper[4817]: I0314 06:08:38.565639 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:08:38 crc kubenswrapper[4817]: I0314 06:08:38.566337 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:08:43 crc kubenswrapper[4817]: I0314 06:08:43.829960 4817 scope.go:117] "RemoveContainer" containerID="3506ac538c8389e40884db0a6112f4fee5e6ebc98588b0b8f1e1f126b6992b08" Mar 14 06:08:45 crc kubenswrapper[4817]: I0314 06:08:45.959948 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9r458"] Mar 14 06:08:45 crc kubenswrapper[4817]: E0314 06:08:45.960874 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83371b58-2667-4d13-a275-cf93548d7d0f" containerName="oc" Mar 14 06:08:45 crc kubenswrapper[4817]: I0314 06:08:45.960908 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="83371b58-2667-4d13-a275-cf93548d7d0f" containerName="oc" Mar 14 06:08:45 crc kubenswrapper[4817]: I0314 06:08:45.961122 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="83371b58-2667-4d13-a275-cf93548d7d0f" containerName="oc" Mar 14 06:08:45 crc kubenswrapper[4817]: I0314 06:08:45.962648 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:45 crc kubenswrapper[4817]: I0314 06:08:45.976706 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9r458"] Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.044284 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-utilities\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.044385 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdkhl\" (UniqueName: \"kubernetes.io/projected/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-kube-api-access-bdkhl\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.044411 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-catalog-content\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.146119 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-utilities\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.146248 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bdkhl\" (UniqueName: \"kubernetes.io/projected/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-kube-api-access-bdkhl\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.146282 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-catalog-content\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.146806 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-utilities\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.147035 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-catalog-content\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.168756 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdkhl\" (UniqueName: \"kubernetes.io/projected/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-kube-api-access-bdkhl\") pod \"certified-operators-9r458\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") " pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.314525 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:46 crc kubenswrapper[4817]: I0314 06:08:46.855838 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9r458"] Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.350627 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hc6gv"] Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.354807 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.359932 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hc6gv"] Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.472450 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-catalog-content\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.472511 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-utilities\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.472568 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mw5h\" (UniqueName: \"kubernetes.io/projected/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-kube-api-access-4mw5h\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " 
pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.574935 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mw5h\" (UniqueName: \"kubernetes.io/projected/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-kube-api-access-4mw5h\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.575105 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-catalog-content\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.575141 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-utilities\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.575857 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-utilities\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.575910 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-catalog-content\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " 
pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.603844 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mw5h\" (UniqueName: \"kubernetes.io/projected/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-kube-api-access-4mw5h\") pod \"community-operators-hc6gv\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") " pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.633062 4817 generic.go:334] "Generic (PLEG): container finished" podID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerID="5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106" exitCode=0 Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.633106 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r458" event={"ID":"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919","Type":"ContainerDied","Data":"5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106"} Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.633150 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r458" event={"ID":"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919","Type":"ContainerStarted","Data":"e72fadf5e51f58911b34b6f41fb12aa54b38b603420dd3359da4373e3064a6b1"} Mar 14 06:08:47 crc kubenswrapper[4817]: I0314 06:08:47.682588 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:48 crc kubenswrapper[4817]: I0314 06:08:48.325926 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hc6gv"] Mar 14 06:08:48 crc kubenswrapper[4817]: I0314 06:08:48.658402 4817 generic.go:334] "Generic (PLEG): container finished" podID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerID="80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7" exitCode=0 Mar 14 06:08:48 crc kubenswrapper[4817]: I0314 06:08:48.658501 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hc6gv" event={"ID":"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41","Type":"ContainerDied","Data":"80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7"} Mar 14 06:08:48 crc kubenswrapper[4817]: I0314 06:08:48.658563 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hc6gv" event={"ID":"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41","Type":"ContainerStarted","Data":"584bc386a25b14002b7cb454961f9acb23b946ef25703142c698fff355ed1c6e"} Mar 14 06:08:49 crc kubenswrapper[4817]: I0314 06:08:49.670887 4817 generic.go:334] "Generic (PLEG): container finished" podID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerID="e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6" exitCode=0 Mar 14 06:08:49 crc kubenswrapper[4817]: I0314 06:08:49.670969 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r458" event={"ID":"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919","Type":"ContainerDied","Data":"e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6"} Mar 14 06:08:49 crc kubenswrapper[4817]: I0314 06:08:49.678212 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hc6gv" 
event={"ID":"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41","Type":"ContainerStarted","Data":"1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47"} Mar 14 06:08:50 crc kubenswrapper[4817]: I0314 06:08:50.691475 4817 generic.go:334] "Generic (PLEG): container finished" podID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerID="1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47" exitCode=0 Mar 14 06:08:50 crc kubenswrapper[4817]: I0314 06:08:50.691691 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hc6gv" event={"ID":"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41","Type":"ContainerDied","Data":"1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47"} Mar 14 06:08:50 crc kubenswrapper[4817]: I0314 06:08:50.709238 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r458" event={"ID":"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919","Type":"ContainerStarted","Data":"64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8"} Mar 14 06:08:50 crc kubenswrapper[4817]: I0314 06:08:50.751054 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9r458" podStartSLOduration=3.292999425 podStartE2EDuration="5.751027658s" podCreationTimestamp="2026-03-14 06:08:45 +0000 UTC" firstStartedPulling="2026-03-14 06:08:47.636002863 +0000 UTC m=+2181.674263609" lastFinishedPulling="2026-03-14 06:08:50.094031056 +0000 UTC m=+2184.132291842" observedRunningTime="2026-03-14 06:08:50.746296283 +0000 UTC m=+2184.784557039" watchObservedRunningTime="2026-03-14 06:08:50.751027658 +0000 UTC m=+2184.789288424" Mar 14 06:08:51 crc kubenswrapper[4817]: I0314 06:08:51.721238 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hc6gv" 
event={"ID":"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41","Type":"ContainerStarted","Data":"629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc"} Mar 14 06:08:51 crc kubenswrapper[4817]: I0314 06:08:51.754127 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hc6gv" podStartSLOduration=2.270399045 podStartE2EDuration="4.75410372s" podCreationTimestamp="2026-03-14 06:08:47 +0000 UTC" firstStartedPulling="2026-03-14 06:08:48.662008129 +0000 UTC m=+2182.700268875" lastFinishedPulling="2026-03-14 06:08:51.145712804 +0000 UTC m=+2185.183973550" observedRunningTime="2026-03-14 06:08:51.743395525 +0000 UTC m=+2185.781656281" watchObservedRunningTime="2026-03-14 06:08:51.75410372 +0000 UTC m=+2185.792364466" Mar 14 06:08:56 crc kubenswrapper[4817]: I0314 06:08:56.314691 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:56 crc kubenswrapper[4817]: I0314 06:08:56.315382 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:56 crc kubenswrapper[4817]: I0314 06:08:56.401872 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:56 crc kubenswrapper[4817]: I0314 06:08:56.833862 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9r458" Mar 14 06:08:57 crc kubenswrapper[4817]: I0314 06:08:57.683963 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:57 crc kubenswrapper[4817]: I0314 06:08:57.684545 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hc6gv" Mar 14 06:08:57 crc kubenswrapper[4817]: I0314 06:08:57.751932 4817 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hc6gv"
Mar 14 06:08:57 crc kubenswrapper[4817]: I0314 06:08:57.871184 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hc6gv"
Mar 14 06:08:58 crc kubenswrapper[4817]: I0314 06:08:58.726575 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9r458"]
Mar 14 06:08:58 crc kubenswrapper[4817]: I0314 06:08:58.812329 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9r458" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerName="registry-server" containerID="cri-o://64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8" gracePeriod=2
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.271540 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9r458"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.354386 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdkhl\" (UniqueName: \"kubernetes.io/projected/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-kube-api-access-bdkhl\") pod \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") "
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.354535 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-utilities\") pod \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") "
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.354754 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-catalog-content\") pod \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\" (UID: \"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919\") "
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.355610 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-utilities" (OuterVolumeSpecName: "utilities") pod "d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" (UID: "d15ef173-a2b9-4a41-ba64-e5d1c4aa2919"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.363197 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-kube-api-access-bdkhl" (OuterVolumeSpecName: "kube-api-access-bdkhl") pod "d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" (UID: "d15ef173-a2b9-4a41-ba64-e5d1c4aa2919"). InnerVolumeSpecName "kube-api-access-bdkhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.415182 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" (UID: "d15ef173-a2b9-4a41-ba64-e5d1c4aa2919"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.457554 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdkhl\" (UniqueName: \"kubernetes.io/projected/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-kube-api-access-bdkhl\") on node \"crc\" DevicePath \"\""
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.457592 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.457605 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.831292 4817 generic.go:334] "Generic (PLEG): container finished" podID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerID="64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8" exitCode=0
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.831682 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r458" event={"ID":"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919","Type":"ContainerDied","Data":"64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8"}
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.831722 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9r458"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.831752 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r458" event={"ID":"d15ef173-a2b9-4a41-ba64-e5d1c4aa2919","Type":"ContainerDied","Data":"e72fadf5e51f58911b34b6f41fb12aa54b38b603420dd3359da4373e3064a6b1"}
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.831776 4817 scope.go:117] "RemoveContainer" containerID="64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.873391 4817 scope.go:117] "RemoveContainer" containerID="e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.881083 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9r458"]
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.910252 4817 scope.go:117] "RemoveContainer" containerID="5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.921456 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9r458"]
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.954685 4817 scope.go:117] "RemoveContainer" containerID="64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8"
Mar 14 06:08:59 crc kubenswrapper[4817]: E0314 06:08:59.955471 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8\": container with ID starting with 64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8 not found: ID does not exist" containerID="64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.955540 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8"} err="failed to get container status \"64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8\": rpc error: code = NotFound desc = could not find container \"64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8\": container with ID starting with 64faf95a791080fd75ac4163a8d83f3b0b8e724a32996e15d6075c4336a056c8 not found: ID does not exist"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.955578 4817 scope.go:117] "RemoveContainer" containerID="e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6"
Mar 14 06:08:59 crc kubenswrapper[4817]: E0314 06:08:59.956218 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6\": container with ID starting with e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6 not found: ID does not exist" containerID="e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.956274 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6"} err="failed to get container status \"e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6\": rpc error: code = NotFound desc = could not find container \"e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6\": container with ID starting with e3b9b8005c4bf0198762ad8d6ba7fe6cd5279179b936fb8433cfbcd207812ea6 not found: ID does not exist"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.956311 4817 scope.go:117] "RemoveContainer" containerID="5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106"
Mar 14 06:08:59 crc kubenswrapper[4817]: E0314 06:08:59.956727 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106\": container with ID starting with 5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106 not found: ID does not exist" containerID="5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106"
Mar 14 06:08:59 crc kubenswrapper[4817]: I0314 06:08:59.956759 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106"} err="failed to get container status \"5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106\": rpc error: code = NotFound desc = could not find container \"5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106\": container with ID starting with 5fa00da575f7f5003a77195cb47b9298565cf8a5b9647f8aaa1132ea6f5b4106 not found: ID does not exist"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.129421 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hc6gv"]
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.130040 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hc6gv" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerName="registry-server" containerID="cri-o://629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc" gracePeriod=2
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.582526 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hc6gv"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.680771 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mw5h\" (UniqueName: \"kubernetes.io/projected/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-kube-api-access-4mw5h\") pod \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") "
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.681201 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-utilities\") pod \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") "
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.681462 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-catalog-content\") pod \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\" (UID: \"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41\") "
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.682186 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-utilities" (OuterVolumeSpecName: "utilities") pod "91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" (UID: "91f15a9f-d3fb-451b-bdd2-ffcc5678ce41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.689560 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-kube-api-access-4mw5h" (OuterVolumeSpecName: "kube-api-access-4mw5h") pod "91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" (UID: "91f15a9f-d3fb-451b-bdd2-ffcc5678ce41"). InnerVolumeSpecName "kube-api-access-4mw5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.742750 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" path="/var/lib/kubelet/pods/d15ef173-a2b9-4a41-ba64-e5d1c4aa2919/volumes"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.745193 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" (UID: "91f15a9f-d3fb-451b-bdd2-ffcc5678ce41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.783787 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.784011 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mw5h\" (UniqueName: \"kubernetes.io/projected/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-kube-api-access-4mw5h\") on node \"crc\" DevicePath \"\""
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.784088 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.846614 4817 generic.go:334] "Generic (PLEG): container finished" podID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerID="629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc" exitCode=0
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.846764 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hc6gv" event={"ID":"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41","Type":"ContainerDied","Data":"629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc"}
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.846825 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hc6gv" event={"ID":"91f15a9f-d3fb-451b-bdd2-ffcc5678ce41","Type":"ContainerDied","Data":"584bc386a25b14002b7cb454961f9acb23b946ef25703142c698fff355ed1c6e"}
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.846871 4817 scope.go:117] "RemoveContainer" containerID="629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.850576 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hc6gv"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.905058 4817 scope.go:117] "RemoveContainer" containerID="1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.908541 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hc6gv"]
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.923128 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hc6gv"]
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.937640 4817 scope.go:117] "RemoveContainer" containerID="80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.985910 4817 scope.go:117] "RemoveContainer" containerID="629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc"
Mar 14 06:09:00 crc kubenswrapper[4817]: E0314 06:09:00.986488 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc\": container with ID starting with 629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc not found: ID does not exist" containerID="629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.986528 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc"} err="failed to get container status \"629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc\": rpc error: code = NotFound desc = could not find container \"629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc\": container with ID starting with 629c6caca7a3148ee6e9cfc434553cb5773342c31e967b309e48c08f1c7d62bc not found: ID does not exist"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.986559 4817 scope.go:117] "RemoveContainer" containerID="1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47"
Mar 14 06:09:00 crc kubenswrapper[4817]: E0314 06:09:00.987160 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47\": container with ID starting with 1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47 not found: ID does not exist" containerID="1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.987200 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47"} err="failed to get container status \"1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47\": rpc error: code = NotFound desc = could not find container \"1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47\": container with ID starting with 1908006467ffda0c740e817103483d77698f65ac57b91f83f5c041abe4123d47 not found: ID does not exist"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.987229 4817 scope.go:117] "RemoveContainer" containerID="80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7"
Mar 14 06:09:00 crc kubenswrapper[4817]: E0314 06:09:00.987502 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7\": container with ID starting with 80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7 not found: ID does not exist" containerID="80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7"
Mar 14 06:09:00 crc kubenswrapper[4817]: I0314 06:09:00.987526 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7"} err="failed to get container status \"80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7\": rpc error: code = NotFound desc = could not find container \"80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7\": container with ID starting with 80abb645746b726c73e0054f88ea1af5e816ec5d2b5a8392cd230b08e7887fb7 not found: ID does not exist"
Mar 14 06:09:02 crc kubenswrapper[4817]: I0314 06:09:02.752175 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" path="/var/lib/kubelet/pods/91f15a9f-d3fb-451b-bdd2-ffcc5678ce41/volumes"
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.565788 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.567202 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.567349 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl"
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.568855 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3dab7efbef397f9b73d3e4fc38d0c4c45ca0a8ca618353f78100fd0231d68820"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.568981 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://3dab7efbef397f9b73d3e4fc38d0c4c45ca0a8ca618353f78100fd0231d68820" gracePeriod=600
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.786763 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.803577 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dkgx2"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.813602 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.824070 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.832383 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dkgx2"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.838921 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ncsx7"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.845609 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2hmzb"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.855255 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mkkdf"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.864371 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.875422 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.884558 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.893582 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.902174 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.912124 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jdwfr"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.919205 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-dlp59"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.922335 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="3dab7efbef397f9b73d3e4fc38d0c4c45ca0a8ca618353f78100fd0231d68820" exitCode=0
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.922383 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"3dab7efbef397f9b73d3e4fc38d0c4c45ca0a8ca618353f78100fd0231d68820"}
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.922427 4817 scope.go:117] "RemoveContainer" containerID="ded2b952b853f84de5b3431f62c3af96c81e79674fb9ad5871ac156c841efad2"
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.926652 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-r4cfs"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.934173 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.940788 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vf6xc"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.947407 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fq2cw"]
Mar 14 06:09:08 crc kubenswrapper[4817]: I0314 06:09:08.953088 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4c4hz"]
Mar 14 06:09:09 crc kubenswrapper[4817]: I0314 06:09:09.937749 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"}
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.766727 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d81081-d17c-4d84-8c1c-c7a41cf680be" path="/var/lib/kubelet/pods/03d81081-d17c-4d84-8c1c-c7a41cf680be/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.767992 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05416e96-e39f-4dfb-abd6-d74c6fe66eb7" path="/var/lib/kubelet/pods/05416e96-e39f-4dfb-abd6-d74c6fe66eb7/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.768733 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc1865c-4df2-4c2d-a470-a2c8d02dec60" path="/var/lib/kubelet/pods/0fc1865c-4df2-4c2d-a470-a2c8d02dec60/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.769401 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53" path="/var/lib/kubelet/pods/2b9ce00f-9e89-4eb6-96a3-d2215ad6ae53/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.771314 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9960c053-42b4-4c33-abfb-ec56901d0f02" path="/var/lib/kubelet/pods/9960c053-42b4-4c33-abfb-ec56901d0f02/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.772301 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b985670-a3bb-4996-9742-801968709eb8" path="/var/lib/kubelet/pods/9b985670-a3bb-4996-9742-801968709eb8/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.773163 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0" path="/var/lib/kubelet/pods/a8d5703c-e5f7-44b1-bdf6-b9220f9dd2f0/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.775162 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdedab8b-b434-46b2-be3e-e2fc9be5119e" path="/var/lib/kubelet/pods/bdedab8b-b434-46b2-be3e-e2fc9be5119e/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.776071 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7119be8-fb6d-4bb9-8604-10d36abd643f" path="/var/lib/kubelet/pods/e7119be8-fb6d-4bb9-8604-10d36abd643f/volumes"
Mar 14 06:09:10 crc kubenswrapper[4817]: I0314 06:09:10.776781 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9" path="/var/lib/kubelet/pods/f2bb7743-c6e8-46d1-8d2f-8e8c7d0210f9/volumes"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.247009 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"]
Mar 14 06:09:14 crc kubenswrapper[4817]: E0314 06:09:14.249282 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerName="extract-content"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.249379 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerName="extract-content"
Mar 14 06:09:14 crc kubenswrapper[4817]: E0314 06:09:14.249450 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerName="extract-utilities"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.249507 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerName="extract-utilities"
Mar 14 06:09:14 crc kubenswrapper[4817]: E0314 06:09:14.249560 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerName="extract-utilities"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.249612 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerName="extract-utilities"
Mar 14 06:09:14 crc kubenswrapper[4817]: E0314 06:09:14.249677 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerName="registry-server"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.249737 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerName="registry-server"
Mar 14 06:09:14 crc kubenswrapper[4817]: E0314 06:09:14.249809 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerName="extract-content"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.249862 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerName="extract-content"
Mar 14 06:09:14 crc kubenswrapper[4817]: E0314 06:09:14.250062 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerName="registry-server"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.250118 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerName="registry-server"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.250350 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f15a9f-d3fb-451b-bdd2-ffcc5678ce41" containerName="registry-server"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.250434 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15ef173-a2b9-4a41-ba64-e5d1c4aa2919" containerName="registry-server"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.251696 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.254813 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.256576 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.257424 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.257703 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.258146 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"]
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.260053 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.393937 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.394011 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.394041 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwbf\" (UniqueName: \"kubernetes.io/projected/b9ce930d-a273-4240-ad64-19c4d50a3ec6-kube-api-access-zmwbf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.394239 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.394359 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.496212 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.496275 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.496332 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.496368 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmwbf\" (UniqueName: \"kubernetes.io/projected/b9ce930d-a273-4240-ad64-19c4d50a3ec6-kube-api-access-zmwbf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.496466 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.503763 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.503988 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.504101 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.505042 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314 06:09:14.521649 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmwbf\" (UniqueName: \"kubernetes.io/projected/b9ce930d-a273-4240-ad64-19c4d50a3ec6-kube-api-access-zmwbf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-76mst\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"
Mar 14 06:09:14 crc kubenswrapper[4817]: I0314
06:09:14.593106 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst" Mar 14 06:09:15 crc kubenswrapper[4817]: I0314 06:09:15.172341 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst"] Mar 14 06:09:15 crc kubenswrapper[4817]: W0314 06:09:15.177871 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9ce930d_a273_4240_ad64_19c4d50a3ec6.slice/crio-c5eaeef8fc1f52e97360c7ae5a95b70b468145d60be7ddca6db42db599e9a95c WatchSource:0}: Error finding container c5eaeef8fc1f52e97360c7ae5a95b70b468145d60be7ddca6db42db599e9a95c: Status 404 returned error can't find the container with id c5eaeef8fc1f52e97360c7ae5a95b70b468145d60be7ddca6db42db599e9a95c Mar 14 06:09:16 crc kubenswrapper[4817]: I0314 06:09:16.004685 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst" event={"ID":"b9ce930d-a273-4240-ad64-19c4d50a3ec6","Type":"ContainerStarted","Data":"748213f97a25dd0aaec2bacf884fd254addfa018f8a8c02b674d8a0be9039e7e"} Mar 14 06:09:16 crc kubenswrapper[4817]: I0314 06:09:16.004754 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst" event={"ID":"b9ce930d-a273-4240-ad64-19c4d50a3ec6","Type":"ContainerStarted","Data":"c5eaeef8fc1f52e97360c7ae5a95b70b468145d60be7ddca6db42db599e9a95c"} Mar 14 06:09:16 crc kubenswrapper[4817]: I0314 06:09:16.027855 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst" podStartSLOduration=1.53160168 podStartE2EDuration="2.027831345s" podCreationTimestamp="2026-03-14 06:09:14 +0000 UTC" firstStartedPulling="2026-03-14 06:09:15.183037001 +0000 UTC m=+2209.221297737" 
lastFinishedPulling="2026-03-14 06:09:15.679266656 +0000 UTC m=+2209.717527402" observedRunningTime="2026-03-14 06:09:16.021403881 +0000 UTC m=+2210.059664647" watchObservedRunningTime="2026-03-14 06:09:16.027831345 +0000 UTC m=+2210.066092111" Mar 14 06:09:27 crc kubenswrapper[4817]: I0314 06:09:27.125509 4817 generic.go:334] "Generic (PLEG): container finished" podID="b9ce930d-a273-4240-ad64-19c4d50a3ec6" containerID="748213f97a25dd0aaec2bacf884fd254addfa018f8a8c02b674d8a0be9039e7e" exitCode=0 Mar 14 06:09:27 crc kubenswrapper[4817]: I0314 06:09:27.125589 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst" event={"ID":"b9ce930d-a273-4240-ad64-19c4d50a3ec6","Type":"ContainerDied","Data":"748213f97a25dd0aaec2bacf884fd254addfa018f8a8c02b674d8a0be9039e7e"} Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.615616 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.718413 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-repo-setup-combined-ca-bundle\") pod \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.718601 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-inventory\") pod \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.718655 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ceph\") pod \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.718681 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmwbf\" (UniqueName: \"kubernetes.io/projected/b9ce930d-a273-4240-ad64-19c4d50a3ec6-kube-api-access-zmwbf\") pod \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.718715 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ssh-key-openstack-edpm-ipam\") pod \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\" (UID: \"b9ce930d-a273-4240-ad64-19c4d50a3ec6\") " Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.724459 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b9ce930d-a273-4240-ad64-19c4d50a3ec6" (UID: "b9ce930d-a273-4240-ad64-19c4d50a3ec6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.724506 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ce930d-a273-4240-ad64-19c4d50a3ec6-kube-api-access-zmwbf" (OuterVolumeSpecName: "kube-api-access-zmwbf") pod "b9ce930d-a273-4240-ad64-19c4d50a3ec6" (UID: "b9ce930d-a273-4240-ad64-19c4d50a3ec6"). InnerVolumeSpecName "kube-api-access-zmwbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.725009 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ceph" (OuterVolumeSpecName: "ceph") pod "b9ce930d-a273-4240-ad64-19c4d50a3ec6" (UID: "b9ce930d-a273-4240-ad64-19c4d50a3ec6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.767471 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b9ce930d-a273-4240-ad64-19c4d50a3ec6" (UID: "b9ce930d-a273-4240-ad64-19c4d50a3ec6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.773953 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-inventory" (OuterVolumeSpecName: "inventory") pod "b9ce930d-a273-4240-ad64-19c4d50a3ec6" (UID: "b9ce930d-a273-4240-ad64-19c4d50a3ec6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.821258 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.821304 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.821318 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmwbf\" (UniqueName: \"kubernetes.io/projected/b9ce930d-a273-4240-ad64-19c4d50a3ec6-kube-api-access-zmwbf\") on node \"crc\" DevicePath \"\"" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.821336 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:09:28 crc kubenswrapper[4817]: I0314 06:09:28.821348 4817 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce930d-a273-4240-ad64-19c4d50a3ec6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.144746 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst" event={"ID":"b9ce930d-a273-4240-ad64-19c4d50a3ec6","Type":"ContainerDied","Data":"c5eaeef8fc1f52e97360c7ae5a95b70b468145d60be7ddca6db42db599e9a95c"} Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.145141 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5eaeef8fc1f52e97360c7ae5a95b70b468145d60be7ddca6db42db599e9a95c" Mar 14 06:09:29 crc 
kubenswrapper[4817]: I0314 06:09:29.144864 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-76mst" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.259141 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c"] Mar 14 06:09:29 crc kubenswrapper[4817]: E0314 06:09:29.259586 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ce930d-a273-4240-ad64-19c4d50a3ec6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.259607 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ce930d-a273-4240-ad64-19c4d50a3ec6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.259814 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ce930d-a273-4240-ad64-19c4d50a3ec6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.262962 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.268482 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.268986 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.269179 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.269393 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.269560 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.269716 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c"] Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.331020 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.331081 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: 
\"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.331116 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksdl7\" (UniqueName: \"kubernetes.io/projected/03575d81-89e3-4d1a-a27a-5aad81319453-kube-api-access-ksdl7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.331143 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.331161 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.432944 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 
06:09:29.433096 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.433167 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksdl7\" (UniqueName: \"kubernetes.io/projected/03575d81-89e3-4d1a-a27a-5aad81319453-kube-api-access-ksdl7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.433231 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.433271 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.439174 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ssh-key-openstack-edpm-ipam\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.439199 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.439398 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.439721 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.452961 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksdl7\" (UniqueName: \"kubernetes.io/projected/03575d81-89e3-4d1a-a27a-5aad81319453-kube-api-access-ksdl7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:29 crc kubenswrapper[4817]: I0314 06:09:29.599237 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:09:30 crc kubenswrapper[4817]: I0314 06:09:30.195406 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c"] Mar 14 06:09:30 crc kubenswrapper[4817]: W0314 06:09:30.208441 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03575d81_89e3_4d1a_a27a_5aad81319453.slice/crio-c8eba5a0e973fe36dcf4872ef9ad9304d2cb75671fd13e6dfbec2d1588b1b5d2 WatchSource:0}: Error finding container c8eba5a0e973fe36dcf4872ef9ad9304d2cb75671fd13e6dfbec2d1588b1b5d2: Status 404 returned error can't find the container with id c8eba5a0e973fe36dcf4872ef9ad9304d2cb75671fd13e6dfbec2d1588b1b5d2 Mar 14 06:09:31 crc kubenswrapper[4817]: I0314 06:09:31.164410 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" event={"ID":"03575d81-89e3-4d1a-a27a-5aad81319453","Type":"ContainerStarted","Data":"e450d547fa59d24bfe15e0bcc617576ccb4b180e549f5ba74d33796465653a3f"} Mar 14 06:09:31 crc kubenswrapper[4817]: I0314 06:09:31.164770 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" event={"ID":"03575d81-89e3-4d1a-a27a-5aad81319453","Type":"ContainerStarted","Data":"c8eba5a0e973fe36dcf4872ef9ad9304d2cb75671fd13e6dfbec2d1588b1b5d2"} Mar 14 06:09:31 crc kubenswrapper[4817]: I0314 06:09:31.189314 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" podStartSLOduration=1.797893852 podStartE2EDuration="2.189290545s" podCreationTimestamp="2026-03-14 06:09:29 +0000 UTC" firstStartedPulling="2026-03-14 06:09:30.2120346 +0000 UTC m=+2224.250295356" lastFinishedPulling="2026-03-14 06:09:30.603431303 +0000 UTC m=+2224.641692049" 
observedRunningTime="2026-03-14 06:09:31.18630677 +0000 UTC m=+2225.224567516" watchObservedRunningTime="2026-03-14 06:09:31.189290545 +0000 UTC m=+2225.227551291" Mar 14 06:09:43 crc kubenswrapper[4817]: I0314 06:09:43.928576 4817 scope.go:117] "RemoveContainer" containerID="42858a84851a16ccec6a65212c0b5f2423a3cab6bd4999a8ad99a98c34c31b1a" Mar 14 06:09:43 crc kubenswrapper[4817]: I0314 06:09:43.970461 4817 scope.go:117] "RemoveContainer" containerID="674e054e82a8e88a0b6aa731b4eae7f804c9dc4e2ad6c1f2e82fc15d2a7a00d8" Mar 14 06:09:44 crc kubenswrapper[4817]: I0314 06:09:44.077819 4817 scope.go:117] "RemoveContainer" containerID="0a9f79e67c822fc44510cec292a4557c4c06aae94b541f5eb08fa031e968c9e9" Mar 14 06:09:44 crc kubenswrapper[4817]: I0314 06:09:44.119503 4817 scope.go:117] "RemoveContainer" containerID="0e3b31413bf75cba0efb8fe57badbf63fa7a138661e1b6656e0f5e4adafede6d" Mar 14 06:09:44 crc kubenswrapper[4817]: I0314 06:09:44.186202 4817 scope.go:117] "RemoveContainer" containerID="e73a24702cb71ef3d14e4039590d785e78ccb9bd82233b0e575f9a5021117c89" Mar 14 06:09:44 crc kubenswrapper[4817]: I0314 06:09:44.225803 4817 scope.go:117] "RemoveContainer" containerID="0cdbc84f97fc936be0e3367287c9fe3e0f333bb79c22911c6eca400b13cb236f" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.104087 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kq2bb"] Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.106970 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.154024 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq2bb"] Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.234789 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-catalog-content\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.234854 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-utilities\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.234988 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw55l\" (UniqueName: \"kubernetes.io/projected/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-kube-api-access-bw55l\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.337626 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-catalog-content\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.337711 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-utilities\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.337806 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw55l\" (UniqueName: \"kubernetes.io/projected/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-kube-api-access-bw55l\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.338342 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-utilities\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.338362 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-catalog-content\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.360914 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw55l\" (UniqueName: \"kubernetes.io/projected/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-kube-api-access-bw55l\") pod \"redhat-marketplace-kq2bb\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.439592 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:09:55 crc kubenswrapper[4817]: I0314 06:09:55.884429 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq2bb"] Mar 14 06:09:56 crc kubenswrapper[4817]: I0314 06:09:56.469026 4817 generic.go:334] "Generic (PLEG): container finished" podID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerID="bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4" exitCode=0 Mar 14 06:09:56 crc kubenswrapper[4817]: I0314 06:09:56.469085 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq2bb" event={"ID":"a29a77fb-aac5-41e6-9cd3-1272b72bf54e","Type":"ContainerDied","Data":"bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4"} Mar 14 06:09:56 crc kubenswrapper[4817]: I0314 06:09:56.469325 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq2bb" event={"ID":"a29a77fb-aac5-41e6-9cd3-1272b72bf54e","Type":"ContainerStarted","Data":"46da5a905b560dff988ed7a3447c390393757c516b69849dd21bfa968640b30f"} Mar 14 06:09:57 crc kubenswrapper[4817]: I0314 06:09:57.495920 4817 generic.go:334] "Generic (PLEG): container finished" podID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerID="b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b" exitCode=0 Mar 14 06:09:57 crc kubenswrapper[4817]: I0314 06:09:57.495998 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq2bb" event={"ID":"a29a77fb-aac5-41e6-9cd3-1272b72bf54e","Type":"ContainerDied","Data":"b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b"} Mar 14 06:09:58 crc kubenswrapper[4817]: I0314 06:09:58.509256 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq2bb" 
event={"ID":"a29a77fb-aac5-41e6-9cd3-1272b72bf54e","Type":"ContainerStarted","Data":"60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9"} Mar 14 06:09:58 crc kubenswrapper[4817]: I0314 06:09:58.536241 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kq2bb" podStartSLOduration=2.117984379 podStartE2EDuration="3.536212341s" podCreationTimestamp="2026-03-14 06:09:55 +0000 UTC" firstStartedPulling="2026-03-14 06:09:56.470997291 +0000 UTC m=+2250.509258037" lastFinishedPulling="2026-03-14 06:09:57.889225243 +0000 UTC m=+2251.927485999" observedRunningTime="2026-03-14 06:09:58.53373894 +0000 UTC m=+2252.571999706" watchObservedRunningTime="2026-03-14 06:09:58.536212341 +0000 UTC m=+2252.574473097" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.152274 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557810-s8npb"] Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.154117 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557810-s8npb" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.159280 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.159501 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.159688 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.164327 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557810-s8npb"] Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.245701 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kl4k\" (UniqueName: \"kubernetes.io/projected/e6729a15-1d9c-4f1f-961a-07b401796464-kube-api-access-5kl4k\") pod \"auto-csr-approver-29557810-s8npb\" (UID: \"e6729a15-1d9c-4f1f-961a-07b401796464\") " pod="openshift-infra/auto-csr-approver-29557810-s8npb" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.348515 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kl4k\" (UniqueName: \"kubernetes.io/projected/e6729a15-1d9c-4f1f-961a-07b401796464-kube-api-access-5kl4k\") pod \"auto-csr-approver-29557810-s8npb\" (UID: \"e6729a15-1d9c-4f1f-961a-07b401796464\") " pod="openshift-infra/auto-csr-approver-29557810-s8npb" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.373255 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kl4k\" (UniqueName: \"kubernetes.io/projected/e6729a15-1d9c-4f1f-961a-07b401796464-kube-api-access-5kl4k\") pod \"auto-csr-approver-29557810-s8npb\" (UID: \"e6729a15-1d9c-4f1f-961a-07b401796464\") " 
pod="openshift-infra/auto-csr-approver-29557810-s8npb" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.531399 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557810-s8npb" Mar 14 06:10:00 crc kubenswrapper[4817]: I0314 06:10:00.989490 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557810-s8npb"] Mar 14 06:10:01 crc kubenswrapper[4817]: I0314 06:10:01.539402 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557810-s8npb" event={"ID":"e6729a15-1d9c-4f1f-961a-07b401796464","Type":"ContainerStarted","Data":"1f3db40b401a2175c72ab824c7a3ea437989b073f426860adcc14d3ec992a194"} Mar 14 06:10:02 crc kubenswrapper[4817]: I0314 06:10:02.550527 4817 generic.go:334] "Generic (PLEG): container finished" podID="e6729a15-1d9c-4f1f-961a-07b401796464" containerID="d28077df329f8a7ca712d04e7e05ebd2531e7d74325276fc490c34dfb5cd39a0" exitCode=0 Mar 14 06:10:02 crc kubenswrapper[4817]: I0314 06:10:02.550633 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557810-s8npb" event={"ID":"e6729a15-1d9c-4f1f-961a-07b401796464","Type":"ContainerDied","Data":"d28077df329f8a7ca712d04e7e05ebd2531e7d74325276fc490c34dfb5cd39a0"} Mar 14 06:10:03 crc kubenswrapper[4817]: I0314 06:10:03.895432 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557810-s8npb" Mar 14 06:10:04 crc kubenswrapper[4817]: I0314 06:10:04.036327 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kl4k\" (UniqueName: \"kubernetes.io/projected/e6729a15-1d9c-4f1f-961a-07b401796464-kube-api-access-5kl4k\") pod \"e6729a15-1d9c-4f1f-961a-07b401796464\" (UID: \"e6729a15-1d9c-4f1f-961a-07b401796464\") " Mar 14 06:10:04 crc kubenswrapper[4817]: I0314 06:10:04.043967 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6729a15-1d9c-4f1f-961a-07b401796464-kube-api-access-5kl4k" (OuterVolumeSpecName: "kube-api-access-5kl4k") pod "e6729a15-1d9c-4f1f-961a-07b401796464" (UID: "e6729a15-1d9c-4f1f-961a-07b401796464"). InnerVolumeSpecName "kube-api-access-5kl4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:10:04 crc kubenswrapper[4817]: I0314 06:10:04.140319 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kl4k\" (UniqueName: \"kubernetes.io/projected/e6729a15-1d9c-4f1f-961a-07b401796464-kube-api-access-5kl4k\") on node \"crc\" DevicePath \"\"" Mar 14 06:10:04 crc kubenswrapper[4817]: I0314 06:10:04.592152 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557810-s8npb" event={"ID":"e6729a15-1d9c-4f1f-961a-07b401796464","Type":"ContainerDied","Data":"1f3db40b401a2175c72ab824c7a3ea437989b073f426860adcc14d3ec992a194"} Mar 14 06:10:04 crc kubenswrapper[4817]: I0314 06:10:04.592222 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3db40b401a2175c72ab824c7a3ea437989b073f426860adcc14d3ec992a194" Mar 14 06:10:04 crc kubenswrapper[4817]: I0314 06:10:04.592402 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557810-s8npb" Mar 14 06:10:04 crc kubenswrapper[4817]: I0314 06:10:04.976237 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557804-txgbg"] Mar 14 06:10:04 crc kubenswrapper[4817]: I0314 06:10:04.987541 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557804-txgbg"] Mar 14 06:10:05 crc kubenswrapper[4817]: I0314 06:10:05.440492 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:10:05 crc kubenswrapper[4817]: I0314 06:10:05.440577 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:10:05 crc kubenswrapper[4817]: I0314 06:10:05.498373 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:10:05 crc kubenswrapper[4817]: I0314 06:10:05.643312 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:10:05 crc kubenswrapper[4817]: I0314 06:10:05.748817 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq2bb"] Mar 14 06:10:06 crc kubenswrapper[4817]: I0314 06:10:06.742544 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eefc7b79-331d-4b08-977f-1b27e3f414eb" path="/var/lib/kubelet/pods/eefc7b79-331d-4b08-977f-1b27e3f414eb/volumes" Mar 14 06:10:07 crc kubenswrapper[4817]: I0314 06:10:07.617397 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kq2bb" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerName="registry-server" containerID="cri-o://60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9" gracePeriod=2 Mar 14 06:10:08 crc 
kubenswrapper[4817]: I0314 06:10:08.626254 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.628657 4817 generic.go:334] "Generic (PLEG): container finished" podID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerID="60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9" exitCode=0 Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.628712 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq2bb" event={"ID":"a29a77fb-aac5-41e6-9cd3-1272b72bf54e","Type":"ContainerDied","Data":"60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9"} Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.628750 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kq2bb" event={"ID":"a29a77fb-aac5-41e6-9cd3-1272b72bf54e","Type":"ContainerDied","Data":"46da5a905b560dff988ed7a3447c390393757c516b69849dd21bfa968640b30f"} Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.628774 4817 scope.go:117] "RemoveContainer" containerID="60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.699178 4817 scope.go:117] "RemoveContainer" containerID="b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.721736 4817 scope.go:117] "RemoveContainer" containerID="bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.745488 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-utilities\") pod \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " Mar 14 06:10:08 crc kubenswrapper[4817]: 
I0314 06:10:08.745709 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw55l\" (UniqueName: \"kubernetes.io/projected/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-kube-api-access-bw55l\") pod \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.745822 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-catalog-content\") pod \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\" (UID: \"a29a77fb-aac5-41e6-9cd3-1272b72bf54e\") " Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.747309 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-utilities" (OuterVolumeSpecName: "utilities") pod "a29a77fb-aac5-41e6-9cd3-1272b72bf54e" (UID: "a29a77fb-aac5-41e6-9cd3-1272b72bf54e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.759499 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-kube-api-access-bw55l" (OuterVolumeSpecName: "kube-api-access-bw55l") pod "a29a77fb-aac5-41e6-9cd3-1272b72bf54e" (UID: "a29a77fb-aac5-41e6-9cd3-1272b72bf54e"). InnerVolumeSpecName "kube-api-access-bw55l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.766510 4817 scope.go:117] "RemoveContainer" containerID="60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9" Mar 14 06:10:08 crc kubenswrapper[4817]: E0314 06:10:08.767137 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9\": container with ID starting with 60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9 not found: ID does not exist" containerID="60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.767186 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9"} err="failed to get container status \"60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9\": rpc error: code = NotFound desc = could not find container \"60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9\": container with ID starting with 60e8a624911880dc47cd4bc59a28e8b5ac916bbbf4434f9c3aeeb1482a356fb9 not found: ID does not exist" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.767221 4817 scope.go:117] "RemoveContainer" containerID="b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b" Mar 14 06:10:08 crc kubenswrapper[4817]: E0314 06:10:08.767774 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b\": container with ID starting with b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b not found: ID does not exist" containerID="b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.767889 
4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b"} err="failed to get container status \"b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b\": rpc error: code = NotFound desc = could not find container \"b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b\": container with ID starting with b54b45de0642617a41fe1c1819a47e1ae543e33af0917be12311e2d9643dff7b not found: ID does not exist" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.768004 4817 scope.go:117] "RemoveContainer" containerID="bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4" Mar 14 06:10:08 crc kubenswrapper[4817]: E0314 06:10:08.768625 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4\": container with ID starting with bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4 not found: ID does not exist" containerID="bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.768653 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4"} err="failed to get container status \"bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4\": rpc error: code = NotFound desc = could not find container \"bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4\": container with ID starting with bef038d0dcd92cf9dcfebc2cc039a3acdd2fb29a4e818e00acc53e074017b0f4 not found: ID does not exist" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.775092 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "a29a77fb-aac5-41e6-9cd3-1272b72bf54e" (UID: "a29a77fb-aac5-41e6-9cd3-1272b72bf54e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.848235 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw55l\" (UniqueName: \"kubernetes.io/projected/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-kube-api-access-bw55l\") on node \"crc\" DevicePath \"\"" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.848278 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:10:08 crc kubenswrapper[4817]: I0314 06:10:08.848290 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a29a77fb-aac5-41e6-9cd3-1272b72bf54e-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:10:09 crc kubenswrapper[4817]: I0314 06:10:09.640194 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kq2bb" Mar 14 06:10:09 crc kubenswrapper[4817]: I0314 06:10:09.688647 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq2bb"] Mar 14 06:10:09 crc kubenswrapper[4817]: I0314 06:10:09.701916 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kq2bb"] Mar 14 06:10:10 crc kubenswrapper[4817]: I0314 06:10:10.748132 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" path="/var/lib/kubelet/pods/a29a77fb-aac5-41e6-9cd3-1272b72bf54e/volumes" Mar 14 06:10:44 crc kubenswrapper[4817]: I0314 06:10:44.379478 4817 scope.go:117] "RemoveContainer" containerID="55c21c2681875d9e52bb111ec8a48e9d83ff964d7d7e4faa459badb0ce7b27ed" Mar 14 06:10:44 crc kubenswrapper[4817]: I0314 06:10:44.445403 4817 scope.go:117] "RemoveContainer" containerID="42e37c64fab633d0842648d0a8d35d9d86004550e1c2e89a77ed1fa2b731dba6" Mar 14 06:10:44 crc kubenswrapper[4817]: I0314 06:10:44.482270 4817 scope.go:117] "RemoveContainer" containerID="93b989742fae0f4354f257e4eb8b257dfaff84b4f4ca6273236fdb675145ed85" Mar 14 06:11:08 crc kubenswrapper[4817]: I0314 06:11:08.256205 4817 generic.go:334] "Generic (PLEG): container finished" podID="03575d81-89e3-4d1a-a27a-5aad81319453" containerID="e450d547fa59d24bfe15e0bcc617576ccb4b180e549f5ba74d33796465653a3f" exitCode=0 Mar 14 06:11:08 crc kubenswrapper[4817]: I0314 06:11:08.256326 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" event={"ID":"03575d81-89e3-4d1a-a27a-5aad81319453","Type":"ContainerDied","Data":"e450d547fa59d24bfe15e0bcc617576ccb4b180e549f5ba74d33796465653a3f"} Mar 14 06:11:08 crc kubenswrapper[4817]: I0314 06:11:08.565617 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:11:08 crc kubenswrapper[4817]: I0314 06:11:08.565706 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.739229 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.920265 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ceph\") pod \"03575d81-89e3-4d1a-a27a-5aad81319453\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.920352 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-bootstrap-combined-ca-bundle\") pod \"03575d81-89e3-4d1a-a27a-5aad81319453\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.920469 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-inventory\") pod \"03575d81-89e3-4d1a-a27a-5aad81319453\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.920576 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ksdl7\" (UniqueName: \"kubernetes.io/projected/03575d81-89e3-4d1a-a27a-5aad81319453-kube-api-access-ksdl7\") pod \"03575d81-89e3-4d1a-a27a-5aad81319453\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.920664 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ssh-key-openstack-edpm-ipam\") pod \"03575d81-89e3-4d1a-a27a-5aad81319453\" (UID: \"03575d81-89e3-4d1a-a27a-5aad81319453\") " Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.927336 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03575d81-89e3-4d1a-a27a-5aad81319453-kube-api-access-ksdl7" (OuterVolumeSpecName: "kube-api-access-ksdl7") pod "03575d81-89e3-4d1a-a27a-5aad81319453" (UID: "03575d81-89e3-4d1a-a27a-5aad81319453"). InnerVolumeSpecName "kube-api-access-ksdl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.928063 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ceph" (OuterVolumeSpecName: "ceph") pod "03575d81-89e3-4d1a-a27a-5aad81319453" (UID: "03575d81-89e3-4d1a-a27a-5aad81319453"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.929068 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "03575d81-89e3-4d1a-a27a-5aad81319453" (UID: "03575d81-89e3-4d1a-a27a-5aad81319453"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.953403 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-inventory" (OuterVolumeSpecName: "inventory") pod "03575d81-89e3-4d1a-a27a-5aad81319453" (UID: "03575d81-89e3-4d1a-a27a-5aad81319453"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:11:09 crc kubenswrapper[4817]: I0314 06:11:09.970066 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03575d81-89e3-4d1a-a27a-5aad81319453" (UID: "03575d81-89e3-4d1a-a27a-5aad81319453"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.024640 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.024681 4817 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.024697 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.024711 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksdl7\" (UniqueName: \"kubernetes.io/projected/03575d81-89e3-4d1a-a27a-5aad81319453-kube-api-access-ksdl7\") on node 
\"crc\" DevicePath \"\"" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.024723 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03575d81-89e3-4d1a-a27a-5aad81319453-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.279195 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" event={"ID":"03575d81-89e3-4d1a-a27a-5aad81319453","Type":"ContainerDied","Data":"c8eba5a0e973fe36dcf4872ef9ad9304d2cb75671fd13e6dfbec2d1588b1b5d2"} Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.279248 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8eba5a0e973fe36dcf4872ef9ad9304d2cb75671fd13e6dfbec2d1588b1b5d2" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.279291 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.381054 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"] Mar 14 06:11:10 crc kubenswrapper[4817]: E0314 06:11:10.381637 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03575d81-89e3-4d1a-a27a-5aad81319453" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.381667 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="03575d81-89e3-4d1a-a27a-5aad81319453" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 14 06:11:10 crc kubenswrapper[4817]: E0314 06:11:10.381688 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerName="registry-server" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.381699 
4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerName="registry-server" Mar 14 06:11:10 crc kubenswrapper[4817]: E0314 06:11:10.381731 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6729a15-1d9c-4f1f-961a-07b401796464" containerName="oc" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.381741 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6729a15-1d9c-4f1f-961a-07b401796464" containerName="oc" Mar 14 06:11:10 crc kubenswrapper[4817]: E0314 06:11:10.381759 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerName="extract-content" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.381768 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerName="extract-content" Mar 14 06:11:10 crc kubenswrapper[4817]: E0314 06:11:10.381793 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerName="extract-utilities" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.381803 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerName="extract-utilities" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.382024 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="03575d81-89e3-4d1a-a27a-5aad81319453" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.382048 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6729a15-1d9c-4f1f-961a-07b401796464" containerName="oc" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.382061 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29a77fb-aac5-41e6-9cd3-1272b72bf54e" containerName="registry-server" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.383211 4817 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.387689 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.391532 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.392486 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.393391 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.393818 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.404615 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"] Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.535451 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwwq8\" (UniqueName: \"kubernetes.io/projected/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-kube-api-access-wwwq8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.535512 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.536156 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.536497 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.638858 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwwq8\" (UniqueName: \"kubernetes.io/projected/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-kube-api-access-wwwq8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.638961 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.639103 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.639170 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.645478 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.646071 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.651990 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.656316 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwwq8\" (UniqueName: \"kubernetes.io/projected/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-kube-api-access-wwwq8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ndntk\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:10 crc kubenswrapper[4817]: I0314 06:11:10.726438 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:11 crc kubenswrapper[4817]: I0314 06:11:11.319600 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"]
Mar 14 06:11:12 crc kubenswrapper[4817]: I0314 06:11:12.303395 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" event={"ID":"7c8f94cd-c90d-40df-af0a-88ddf4730cbc","Type":"ContainerStarted","Data":"468bb02f4757ee7e28135314b984fc87123edf64f89c05332342a25cb4278916"}
Mar 14 06:11:12 crc kubenswrapper[4817]: I0314 06:11:12.304169 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" event={"ID":"7c8f94cd-c90d-40df-af0a-88ddf4730cbc","Type":"ContainerStarted","Data":"9fe10a899d422b106961cc4aec4e20bebd788af9b3c3c2962845ac6eebdd43d1"}
Mar 14 06:11:12 crc kubenswrapper[4817]: I0314 06:11:12.340559 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" podStartSLOduration=1.8330483869999998 podStartE2EDuration="2.340528162s" podCreationTimestamp="2026-03-14 06:11:10 +0000 UTC" firstStartedPulling="2026-03-14 06:11:11.327139096 +0000 UTC m=+2325.365399842" lastFinishedPulling="2026-03-14 06:11:11.834618871 +0000 UTC m=+2325.872879617" observedRunningTime="2026-03-14 06:11:12.332171484 +0000 UTC m=+2326.370432250" watchObservedRunningTime="2026-03-14 06:11:12.340528162 +0000 UTC m=+2326.378788908"
Mar 14 06:11:37 crc kubenswrapper[4817]: I0314 06:11:37.543607 4817 generic.go:334] "Generic (PLEG): container finished" podID="7c8f94cd-c90d-40df-af0a-88ddf4730cbc" containerID="468bb02f4757ee7e28135314b984fc87123edf64f89c05332342a25cb4278916" exitCode=0
Mar 14 06:11:37 crc kubenswrapper[4817]: I0314 06:11:37.543687 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" event={"ID":"7c8f94cd-c90d-40df-af0a-88ddf4730cbc","Type":"ContainerDied","Data":"468bb02f4757ee7e28135314b984fc87123edf64f89c05332342a25cb4278916"}
Mar 14 06:11:38 crc kubenswrapper[4817]: I0314 06:11:38.565354 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:11:38 crc kubenswrapper[4817]: I0314 06:11:38.565712 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.054018 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.176783 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ceph\") pod \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") "
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.177127 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-inventory\") pod \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") "
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.177198 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwwq8\" (UniqueName: \"kubernetes.io/projected/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-kube-api-access-wwwq8\") pod \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") "
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.177270 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ssh-key-openstack-edpm-ipam\") pod \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\" (UID: \"7c8f94cd-c90d-40df-af0a-88ddf4730cbc\") "
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.184610 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-kube-api-access-wwwq8" (OuterVolumeSpecName: "kube-api-access-wwwq8") pod "7c8f94cd-c90d-40df-af0a-88ddf4730cbc" (UID: "7c8f94cd-c90d-40df-af0a-88ddf4730cbc"). InnerVolumeSpecName "kube-api-access-wwwq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.185535 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ceph" (OuterVolumeSpecName: "ceph") pod "7c8f94cd-c90d-40df-af0a-88ddf4730cbc" (UID: "7c8f94cd-c90d-40df-af0a-88ddf4730cbc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.225198 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c8f94cd-c90d-40df-af0a-88ddf4730cbc" (UID: "7c8f94cd-c90d-40df-af0a-88ddf4730cbc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.236932 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-inventory" (OuterVolumeSpecName: "inventory") pod "7c8f94cd-c90d-40df-af0a-88ddf4730cbc" (UID: "7c8f94cd-c90d-40df-af0a-88ddf4730cbc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.279389 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.279434 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwwq8\" (UniqueName: \"kubernetes.io/projected/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-kube-api-access-wwwq8\") on node \"crc\" DevicePath \"\""
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.279448 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.279459 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7c8f94cd-c90d-40df-af0a-88ddf4730cbc-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.563486 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk" event={"ID":"7c8f94cd-c90d-40df-af0a-88ddf4730cbc","Type":"ContainerDied","Data":"9fe10a899d422b106961cc4aec4e20bebd788af9b3c3c2962845ac6eebdd43d1"}
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.563532 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fe10a899d422b106961cc4aec4e20bebd788af9b3c3c2962845ac6eebdd43d1"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.563598 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ndntk"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.647738 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"]
Mar 14 06:11:39 crc kubenswrapper[4817]: E0314 06:11:39.648132 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8f94cd-c90d-40df-af0a-88ddf4730cbc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.648146 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8f94cd-c90d-40df-af0a-88ddf4730cbc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.648350 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8f94cd-c90d-40df-af0a-88ddf4730cbc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.648967 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.651475 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.651607 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.651610 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.652004 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.652127 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.662518 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"]
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.794027 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.794087 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj7n6\" (UniqueName: \"kubernetes.io/projected/b73850a9-8701-4b80-8944-a762eaa7cf5e-kube-api-access-rj7n6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.794478 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.794542 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.895907 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.895955 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.896957 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.897145 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj7n6\" (UniqueName: \"kubernetes.io/projected/b73850a9-8701-4b80-8944-a762eaa7cf5e-kube-api-access-rj7n6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.901063 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.901117 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.901551 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.916308 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj7n6\" (UniqueName: \"kubernetes.io/projected/b73850a9-8701-4b80-8944-a762eaa7cf5e-kube-api-access-rj7n6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:39 crc kubenswrapper[4817]: I0314 06:11:39.969042 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:40 crc kubenswrapper[4817]: I0314 06:11:40.489698 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"]
Mar 14 06:11:40 crc kubenswrapper[4817]: I0314 06:11:40.579605 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj" event={"ID":"b73850a9-8701-4b80-8944-a762eaa7cf5e","Type":"ContainerStarted","Data":"31400dba8741232fa07e039d35c5b55d92ba52ac6b6dc07965d5a35091c5be95"}
Mar 14 06:11:42 crc kubenswrapper[4817]: I0314 06:11:42.595944 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj" event={"ID":"b73850a9-8701-4b80-8944-a762eaa7cf5e","Type":"ContainerStarted","Data":"4475f00578c945afef4930305219f5a88a121c36953d4171c5ec12cf5ace52b2"}
Mar 14 06:11:42 crc kubenswrapper[4817]: I0314 06:11:42.622629 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj" podStartSLOduration=2.658378092 podStartE2EDuration="3.62260826s" podCreationTimestamp="2026-03-14 06:11:39 +0000 UTC" firstStartedPulling="2026-03-14 06:11:40.493600968 +0000 UTC m=+2354.531861704" lastFinishedPulling="2026-03-14 06:11:41.457831126 +0000 UTC m=+2355.496091872" observedRunningTime="2026-03-14 06:11:42.615489738 +0000 UTC m=+2356.653750504" watchObservedRunningTime="2026-03-14 06:11:42.62260826 +0000 UTC m=+2356.660869006"
Mar 14 06:11:44 crc kubenswrapper[4817]: I0314 06:11:44.643139 4817 scope.go:117] "RemoveContainer" containerID="cb4d04b7c8af655ebcade33ba926ed572a30db2ebe145cd268faec4f1b2be5b9"
Mar 14 06:11:44 crc kubenswrapper[4817]: I0314 06:11:44.679457 4817 scope.go:117] "RemoveContainer" containerID="a2564c2893a8026d3bc3312d9f73cd2b91bb3f013b15cc31cbfa8bbd17fb18f5"
Mar 14 06:11:46 crc kubenswrapper[4817]: I0314 06:11:46.633049 4817 generic.go:334] "Generic (PLEG): container finished" podID="b73850a9-8701-4b80-8944-a762eaa7cf5e" containerID="4475f00578c945afef4930305219f5a88a121c36953d4171c5ec12cf5ace52b2" exitCode=0
Mar 14 06:11:46 crc kubenswrapper[4817]: I0314 06:11:46.633099 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj" event={"ID":"b73850a9-8701-4b80-8944-a762eaa7cf5e","Type":"ContainerDied","Data":"4475f00578c945afef4930305219f5a88a121c36953d4171c5ec12cf5ace52b2"}
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.064142 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.161360 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-inventory\") pod \"b73850a9-8701-4b80-8944-a762eaa7cf5e\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") "
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.161570 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj7n6\" (UniqueName: \"kubernetes.io/projected/b73850a9-8701-4b80-8944-a762eaa7cf5e-kube-api-access-rj7n6\") pod \"b73850a9-8701-4b80-8944-a762eaa7cf5e\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") "
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.161601 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ceph\") pod \"b73850a9-8701-4b80-8944-a762eaa7cf5e\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") "
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.161623 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ssh-key-openstack-edpm-ipam\") pod \"b73850a9-8701-4b80-8944-a762eaa7cf5e\" (UID: \"b73850a9-8701-4b80-8944-a762eaa7cf5e\") "
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.167619 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73850a9-8701-4b80-8944-a762eaa7cf5e-kube-api-access-rj7n6" (OuterVolumeSpecName: "kube-api-access-rj7n6") pod "b73850a9-8701-4b80-8944-a762eaa7cf5e" (UID: "b73850a9-8701-4b80-8944-a762eaa7cf5e"). InnerVolumeSpecName "kube-api-access-rj7n6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.178033 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ceph" (OuterVolumeSpecName: "ceph") pod "b73850a9-8701-4b80-8944-a762eaa7cf5e" (UID: "b73850a9-8701-4b80-8944-a762eaa7cf5e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.206150 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-inventory" (OuterVolumeSpecName: "inventory") pod "b73850a9-8701-4b80-8944-a762eaa7cf5e" (UID: "b73850a9-8701-4b80-8944-a762eaa7cf5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.215976 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b73850a9-8701-4b80-8944-a762eaa7cf5e" (UID: "b73850a9-8701-4b80-8944-a762eaa7cf5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.264363 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj7n6\" (UniqueName: \"kubernetes.io/projected/b73850a9-8701-4b80-8944-a762eaa7cf5e-kube-api-access-rj7n6\") on node \"crc\" DevicePath \"\""
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.264400 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.264410 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.264418 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73850a9-8701-4b80-8944-a762eaa7cf5e-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.658298 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj" event={"ID":"b73850a9-8701-4b80-8944-a762eaa7cf5e","Type":"ContainerDied","Data":"31400dba8741232fa07e039d35c5b55d92ba52ac6b6dc07965d5a35091c5be95"}
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.658646 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31400dba8741232fa07e039d35c5b55d92ba52ac6b6dc07965d5a35091c5be95"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.658517 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.773585 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"]
Mar 14 06:11:48 crc kubenswrapper[4817]: E0314 06:11:48.774055 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73850a9-8701-4b80-8944-a762eaa7cf5e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.774103 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73850a9-8701-4b80-8944-a762eaa7cf5e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.774352 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b73850a9-8701-4b80-8944-a762eaa7cf5e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.775386 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.777623 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.779386 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.779687 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.779948 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.781148 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.789114 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"]
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.874208 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftltv\" (UniqueName: \"kubernetes.io/projected/d6d2380c-071b-413a-a854-7b25ae09401a-kube-api-access-ftltv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.874265 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.874320 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.874406 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.977013 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftltv\" (UniqueName: \"kubernetes.io/projected/d6d2380c-071b-413a-a854-7b25ae09401a-kube-api-access-ftltv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.977671 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.977788 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.977942 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.983466 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.986712 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.991436 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:48 crc kubenswrapper[4817]: I0314 06:11:48.995577 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftltv\" (UniqueName: \"kubernetes.io/projected/d6d2380c-071b-413a-a854-7b25ae09401a-kube-api-access-ftltv\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pr57t\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:49 crc kubenswrapper[4817]: I0314 06:11:49.110400 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"
Mar 14 06:11:49 crc kubenswrapper[4817]: I0314 06:11:49.690310 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t"]
Mar 14 06:11:50 crc kubenswrapper[4817]: I0314 06:11:50.679267 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t" event={"ID":"d6d2380c-071b-413a-a854-7b25ae09401a","Type":"ContainerStarted","Data":"033d6bf462aeb8d0c96963154a0d88a0e9f539175b1f9f58a6d22706c5b4318f"}
Mar 14 06:11:51 crc kubenswrapper[4817]: I0314 06:11:51.699837 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t" event={"ID":"d6d2380c-071b-413a-a854-7b25ae09401a","Type":"ContainerStarted","Data":"83e5103d1e71992c2c17e6ee0ee57b9f8a22b6971d864e94a22059624b264e03"}
Mar 14 06:11:51 crc kubenswrapper[4817]: I0314 06:11:51.731644 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t" podStartSLOduration=2.662168044 podStartE2EDuration="3.731621297s" podCreationTimestamp="2026-03-14 06:11:48 +0000 UTC" firstStartedPulling="2026-03-14 06:11:49.699406537 +0000 UTC m=+2363.737667283" lastFinishedPulling="2026-03-14 06:11:50.76885979 +0000 UTC m=+2364.807120536" observedRunningTime="2026-03-14 06:11:51.725943475 +0000 UTC m=+2365.764204221" watchObservedRunningTime="2026-03-14 06:11:51.731621297 +0000 UTC m=+2365.769882043"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.138243 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557812-dj9s9"]
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.140278 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557812-dj9s9"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.153018 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557812-dj9s9"]
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.179169 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.179322 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.179193 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.205770 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqvk4\" (UniqueName: \"kubernetes.io/projected/bf497723-2086-4244-a00d-636a8e10b54c-kube-api-access-nqvk4\") pod \"auto-csr-approver-29557812-dj9s9\" (UID: \"bf497723-2086-4244-a00d-636a8e10b54c\") " pod="openshift-infra/auto-csr-approver-29557812-dj9s9"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.307655 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqvk4\" (UniqueName: \"kubernetes.io/projected/bf497723-2086-4244-a00d-636a8e10b54c-kube-api-access-nqvk4\") pod \"auto-csr-approver-29557812-dj9s9\" (UID: \"bf497723-2086-4244-a00d-636a8e10b54c\") " pod="openshift-infra/auto-csr-approver-29557812-dj9s9"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.337732 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqvk4\" (UniqueName: \"kubernetes.io/projected/bf497723-2086-4244-a00d-636a8e10b54c-kube-api-access-nqvk4\") pod \"auto-csr-approver-29557812-dj9s9\" (UID: \"bf497723-2086-4244-a00d-636a8e10b54c\") " pod="openshift-infra/auto-csr-approver-29557812-dj9s9"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.500227 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557812-dj9s9"
Mar 14 06:12:00 crc kubenswrapper[4817]: I0314 06:12:00.953076 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557812-dj9s9"]
Mar 14 06:12:01 crc kubenswrapper[4817]: I0314 06:12:01.783676 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557812-dj9s9" event={"ID":"bf497723-2086-4244-a00d-636a8e10b54c","Type":"ContainerStarted","Data":"736bb61823dd6b36a125b5ec5db357146b58020576079c2cedde46d13cc2d9c1"}
Mar 14 06:12:02 crc kubenswrapper[4817]: I0314 06:12:02.793717 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557812-dj9s9" event={"ID":"bf497723-2086-4244-a00d-636a8e10b54c","Type":"ContainerStarted","Data":"ed78141dfd235e323d6cdae005a1ef3f9b3f34758caaf7ae3f4a6283b7333090"}
Mar 14 06:12:02 crc kubenswrapper[4817]: I0314 06:12:02.814093 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557812-dj9s9" podStartSLOduration=1.542033171 podStartE2EDuration="2.814076539s" podCreationTimestamp="2026-03-14 06:12:00 +0000 UTC"
firstStartedPulling="2026-03-14 06:12:00.95442314 +0000 UTC m=+2374.992683896" lastFinishedPulling="2026-03-14 06:12:02.226466518 +0000 UTC m=+2376.264727264" observedRunningTime="2026-03-14 06:12:02.813487102 +0000 UTC m=+2376.851747848" watchObservedRunningTime="2026-03-14 06:12:02.814076539 +0000 UTC m=+2376.852337285" Mar 14 06:12:03 crc kubenswrapper[4817]: I0314 06:12:03.808551 4817 generic.go:334] "Generic (PLEG): container finished" podID="bf497723-2086-4244-a00d-636a8e10b54c" containerID="ed78141dfd235e323d6cdae005a1ef3f9b3f34758caaf7ae3f4a6283b7333090" exitCode=0 Mar 14 06:12:03 crc kubenswrapper[4817]: I0314 06:12:03.808607 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557812-dj9s9" event={"ID":"bf497723-2086-4244-a00d-636a8e10b54c","Type":"ContainerDied","Data":"ed78141dfd235e323d6cdae005a1ef3f9b3f34758caaf7ae3f4a6283b7333090"} Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.155612 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557812-dj9s9" Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.312616 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqvk4\" (UniqueName: \"kubernetes.io/projected/bf497723-2086-4244-a00d-636a8e10b54c-kube-api-access-nqvk4\") pod \"bf497723-2086-4244-a00d-636a8e10b54c\" (UID: \"bf497723-2086-4244-a00d-636a8e10b54c\") " Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.324969 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf497723-2086-4244-a00d-636a8e10b54c-kube-api-access-nqvk4" (OuterVolumeSpecName: "kube-api-access-nqvk4") pod "bf497723-2086-4244-a00d-636a8e10b54c" (UID: "bf497723-2086-4244-a00d-636a8e10b54c"). InnerVolumeSpecName "kube-api-access-nqvk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.413740 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqvk4\" (UniqueName: \"kubernetes.io/projected/bf497723-2086-4244-a00d-636a8e10b54c-kube-api-access-nqvk4\") on node \"crc\" DevicePath \"\"" Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.827436 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557812-dj9s9" event={"ID":"bf497723-2086-4244-a00d-636a8e10b54c","Type":"ContainerDied","Data":"736bb61823dd6b36a125b5ec5db357146b58020576079c2cedde46d13cc2d9c1"} Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.827500 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="736bb61823dd6b36a125b5ec5db357146b58020576079c2cedde46d13cc2d9c1" Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.827498 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557812-dj9s9" Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.908847 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557806-fxjzp"] Mar 14 06:12:05 crc kubenswrapper[4817]: I0314 06:12:05.921302 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557806-fxjzp"] Mar 14 06:12:06 crc kubenswrapper[4817]: I0314 06:12:06.757430 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d307f5-0c34-4716-87f8-78cadbe917ca" path="/var/lib/kubelet/pods/16d307f5-0c34-4716-87f8-78cadbe917ca/volumes" Mar 14 06:12:08 crc kubenswrapper[4817]: I0314 06:12:08.565722 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 14 06:12:08 crc kubenswrapper[4817]: I0314 06:12:08.566357 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:12:08 crc kubenswrapper[4817]: I0314 06:12:08.566446 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 06:12:08 crc kubenswrapper[4817]: I0314 06:12:08.567745 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 06:12:08 crc kubenswrapper[4817]: I0314 06:12:08.567943 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" gracePeriod=600 Mar 14 06:12:08 crc kubenswrapper[4817]: E0314 06:12:08.728733 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:12:08 crc kubenswrapper[4817]: 
I0314 06:12:08.865080 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" exitCode=0 Mar 14 06:12:08 crc kubenswrapper[4817]: I0314 06:12:08.865130 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"} Mar 14 06:12:08 crc kubenswrapper[4817]: I0314 06:12:08.865167 4817 scope.go:117] "RemoveContainer" containerID="3dab7efbef397f9b73d3e4fc38d0c4c45ca0a8ca618353f78100fd0231d68820" Mar 14 06:12:08 crc kubenswrapper[4817]: I0314 06:12:08.866475 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:12:08 crc kubenswrapper[4817]: E0314 06:12:08.867520 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:12:23 crc kubenswrapper[4817]: I0314 06:12:23.731897 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:12:23 crc kubenswrapper[4817]: E0314 06:12:23.733081 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:12:26 crc kubenswrapper[4817]: I0314 06:12:26.053842 4817 generic.go:334] "Generic (PLEG): container finished" podID="d6d2380c-071b-413a-a854-7b25ae09401a" containerID="83e5103d1e71992c2c17e6ee0ee57b9f8a22b6971d864e94a22059624b264e03" exitCode=0 Mar 14 06:12:26 crc kubenswrapper[4817]: I0314 06:12:26.053981 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t" event={"ID":"d6d2380c-071b-413a-a854-7b25ae09401a","Type":"ContainerDied","Data":"83e5103d1e71992c2c17e6ee0ee57b9f8a22b6971d864e94a22059624b264e03"} Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.481014 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t" Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.617554 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-inventory\") pod \"d6d2380c-071b-413a-a854-7b25ae09401a\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.617663 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ssh-key-openstack-edpm-ipam\") pod \"d6d2380c-071b-413a-a854-7b25ae09401a\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.617773 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftltv\" (UniqueName: \"kubernetes.io/projected/d6d2380c-071b-413a-a854-7b25ae09401a-kube-api-access-ftltv\") pod \"d6d2380c-071b-413a-a854-7b25ae09401a\" (UID: 
\"d6d2380c-071b-413a-a854-7b25ae09401a\") " Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.617798 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ceph\") pod \"d6d2380c-071b-413a-a854-7b25ae09401a\" (UID: \"d6d2380c-071b-413a-a854-7b25ae09401a\") " Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.633202 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ceph" (OuterVolumeSpecName: "ceph") pod "d6d2380c-071b-413a-a854-7b25ae09401a" (UID: "d6d2380c-071b-413a-a854-7b25ae09401a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.633262 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6d2380c-071b-413a-a854-7b25ae09401a-kube-api-access-ftltv" (OuterVolumeSpecName: "kube-api-access-ftltv") pod "d6d2380c-071b-413a-a854-7b25ae09401a" (UID: "d6d2380c-071b-413a-a854-7b25ae09401a"). InnerVolumeSpecName "kube-api-access-ftltv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.646670 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-inventory" (OuterVolumeSpecName: "inventory") pod "d6d2380c-071b-413a-a854-7b25ae09401a" (UID: "d6d2380c-071b-413a-a854-7b25ae09401a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.648154 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d6d2380c-071b-413a-a854-7b25ae09401a" (UID: "d6d2380c-071b-413a-a854-7b25ae09401a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.719931 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.719956 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.719966 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftltv\" (UniqueName: \"kubernetes.io/projected/d6d2380c-071b-413a-a854-7b25ae09401a-kube-api-access-ftltv\") on node \"crc\" DevicePath \"\"" Mar 14 06:12:27 crc kubenswrapper[4817]: I0314 06:12:27.719975 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d6d2380c-071b-413a-a854-7b25ae09401a-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.069517 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t" event={"ID":"d6d2380c-071b-413a-a854-7b25ae09401a","Type":"ContainerDied","Data":"033d6bf462aeb8d0c96963154a0d88a0e9f539175b1f9f58a6d22706c5b4318f"} Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.069563 
4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="033d6bf462aeb8d0c96963154a0d88a0e9f539175b1f9f58a6d22706c5b4318f" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.069579 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pr57t" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.208722 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn"] Mar 14 06:12:28 crc kubenswrapper[4817]: E0314 06:12:28.216478 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf497723-2086-4244-a00d-636a8e10b54c" containerName="oc" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.216516 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf497723-2086-4244-a00d-636a8e10b54c" containerName="oc" Mar 14 06:12:28 crc kubenswrapper[4817]: E0314 06:12:28.216529 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6d2380c-071b-413a-a854-7b25ae09401a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.216543 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d2380c-071b-413a-a854-7b25ae09401a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.216765 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf497723-2086-4244-a00d-636a8e10b54c" containerName="oc" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.216782 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6d2380c-071b-413a-a854-7b25ae09401a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.219043 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.222292 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn"] Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.229706 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.230071 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.230206 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.230839 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.230865 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.333418 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzg5w\" (UniqueName: \"kubernetes.io/projected/bfe316ac-01fd-4838-b92a-7899469d769f-kube-api-access-dzg5w\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.333784 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ssh-key-openstack-edpm-ipam\") pod 
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.334019 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.334055 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.438137 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzg5w\" (UniqueName: \"kubernetes.io/projected/bfe316ac-01fd-4838-b92a-7899469d769f-kube-api-access-dzg5w\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.438295 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 
crc kubenswrapper[4817]: I0314 06:12:28.438389 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.438424 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.442089 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.442195 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.452503 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" 
(UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.458597 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzg5w\" (UniqueName: \"kubernetes.io/projected/bfe316ac-01fd-4838-b92a-7899469d769f-kube-api-access-dzg5w\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:28 crc kubenswrapper[4817]: I0314 06:12:28.546804 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:29 crc kubenswrapper[4817]: I0314 06:12:29.062123 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn"] Mar 14 06:12:29 crc kubenswrapper[4817]: W0314 06:12:29.063376 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfe316ac_01fd_4838_b92a_7899469d769f.slice/crio-4d2a7777e1206f69d1ebb3943792f7107ad5c8f237657427bde1441b1b6dfb64 WatchSource:0}: Error finding container 4d2a7777e1206f69d1ebb3943792f7107ad5c8f237657427bde1441b1b6dfb64: Status 404 returned error can't find the container with id 4d2a7777e1206f69d1ebb3943792f7107ad5c8f237657427bde1441b1b6dfb64 Mar 14 06:12:29 crc kubenswrapper[4817]: I0314 06:12:29.080353 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" event={"ID":"bfe316ac-01fd-4838-b92a-7899469d769f","Type":"ContainerStarted","Data":"4d2a7777e1206f69d1ebb3943792f7107ad5c8f237657427bde1441b1b6dfb64"} Mar 14 06:12:30 crc kubenswrapper[4817]: I0314 06:12:30.090987 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" event={"ID":"bfe316ac-01fd-4838-b92a-7899469d769f","Type":"ContainerStarted","Data":"80a94cddf5c76ca83df8c1d9223694f46dd1c106ef9d09660c69dc07b33c2638"} Mar 14 06:12:30 crc kubenswrapper[4817]: I0314 06:12:30.124572 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" podStartSLOduration=1.6207980659999999 podStartE2EDuration="2.124548811s" podCreationTimestamp="2026-03-14 06:12:28 +0000 UTC" firstStartedPulling="2026-03-14 06:12:29.066493693 +0000 UTC m=+2403.104754439" lastFinishedPulling="2026-03-14 06:12:29.570244438 +0000 UTC m=+2403.608505184" observedRunningTime="2026-03-14 06:12:30.108279008 +0000 UTC m=+2404.146539764" watchObservedRunningTime="2026-03-14 06:12:30.124548811 +0000 UTC m=+2404.162809577" Mar 14 06:12:34 crc kubenswrapper[4817]: I0314 06:12:34.124515 4817 generic.go:334] "Generic (PLEG): container finished" podID="bfe316ac-01fd-4838-b92a-7899469d769f" containerID="80a94cddf5c76ca83df8c1d9223694f46dd1c106ef9d09660c69dc07b33c2638" exitCode=0 Mar 14 06:12:34 crc kubenswrapper[4817]: I0314 06:12:34.124648 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" event={"ID":"bfe316ac-01fd-4838-b92a-7899469d769f","Type":"ContainerDied","Data":"80a94cddf5c76ca83df8c1d9223694f46dd1c106ef9d09660c69dc07b33c2638"} Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.555259 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.705845 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ssh-key-openstack-edpm-ipam\") pod \"bfe316ac-01fd-4838-b92a-7899469d769f\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.707107 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-inventory\") pod \"bfe316ac-01fd-4838-b92a-7899469d769f\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.707357 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzg5w\" (UniqueName: \"kubernetes.io/projected/bfe316ac-01fd-4838-b92a-7899469d769f-kube-api-access-dzg5w\") pod \"bfe316ac-01fd-4838-b92a-7899469d769f\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.708196 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ceph\") pod \"bfe316ac-01fd-4838-b92a-7899469d769f\" (UID: \"bfe316ac-01fd-4838-b92a-7899469d769f\") " Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.713298 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe316ac-01fd-4838-b92a-7899469d769f-kube-api-access-dzg5w" (OuterVolumeSpecName: "kube-api-access-dzg5w") pod "bfe316ac-01fd-4838-b92a-7899469d769f" (UID: "bfe316ac-01fd-4838-b92a-7899469d769f"). InnerVolumeSpecName "kube-api-access-dzg5w". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.720060 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ceph" (OuterVolumeSpecName: "ceph") pod "bfe316ac-01fd-4838-b92a-7899469d769f" (UID: "bfe316ac-01fd-4838-b92a-7899469d769f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.736338 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-inventory" (OuterVolumeSpecName: "inventory") pod "bfe316ac-01fd-4838-b92a-7899469d769f" (UID: "bfe316ac-01fd-4838-b92a-7899469d769f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.758481 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bfe316ac-01fd-4838-b92a-7899469d769f" (UID: "bfe316ac-01fd-4838-b92a-7899469d769f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.814135 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.814187 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.814199 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzg5w\" (UniqueName: \"kubernetes.io/projected/bfe316ac-01fd-4838-b92a-7899469d769f-kube-api-access-dzg5w\") on node \"crc\" DevicePath \"\""
Mar 14 06:12:35 crc kubenswrapper[4817]: I0314 06:12:35.814210 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bfe316ac-01fd-4838-b92a-7899469d769f-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.146007 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn" event={"ID":"bfe316ac-01fd-4838-b92a-7899469d769f","Type":"ContainerDied","Data":"4d2a7777e1206f69d1ebb3943792f7107ad5c8f237657427bde1441b1b6dfb64"}
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.146727 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d2a7777e1206f69d1ebb3943792f7107ad5c8f237657427bde1441b1b6dfb64"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.146105 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.283288 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"]
Mar 14 06:12:36 crc kubenswrapper[4817]: E0314 06:12:36.283830 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe316ac-01fd-4838-b92a-7899469d769f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.283859 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe316ac-01fd-4838-b92a-7899469d769f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.284074 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe316ac-01fd-4838-b92a-7899469d769f" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.284710 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.286969 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.287338 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.287846 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.288989 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.294396 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.297324 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"]
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.430012 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjtvt\" (UniqueName: \"kubernetes.io/projected/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-kube-api-access-rjtvt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.431040 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.431264 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.431594 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.533390 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjtvt\" (UniqueName: \"kubernetes.io/projected/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-kube-api-access-rjtvt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.533767 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.533973 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.534149 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.538557 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.540472 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.541753 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.553741 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjtvt\" (UniqueName: \"kubernetes.io/projected/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-kube-api-access-rjtvt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:36 crc kubenswrapper[4817]: I0314 06:12:36.604922 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:12:38 crc kubenswrapper[4817]: I0314 06:12:38.461113 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"]
Mar 14 06:12:38 crc kubenswrapper[4817]: I0314 06:12:38.732602 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:12:38 crc kubenswrapper[4817]: E0314 06:12:38.732867 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:12:39 crc kubenswrapper[4817]: I0314 06:12:39.386230 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw" event={"ID":"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a","Type":"ContainerStarted","Data":"02a25e5137624e1b1bf2375b2861df8ac7f96effa12aabf2a65b37b409ec237d"}
Mar 14 06:12:40 crc kubenswrapper[4817]: I0314 06:12:40.400867 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw" event={"ID":"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a","Type":"ContainerStarted","Data":"c1211e81c0f3075c8208e835107f5e721899f027fc7fb4d883cbd54bd31f079c"}
Mar 14 06:12:40 crc kubenswrapper[4817]: I0314 06:12:40.425607 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw" podStartSLOduration=3.706400702 podStartE2EDuration="4.425585635s" podCreationTimestamp="2026-03-14 06:12:36 +0000 UTC" firstStartedPulling="2026-03-14 06:12:38.489741301 +0000 UTC m=+2412.528002047" lastFinishedPulling="2026-03-14 06:12:39.208926234 +0000 UTC m=+2413.247186980" observedRunningTime="2026-03-14 06:12:40.421621172 +0000 UTC m=+2414.459881918" watchObservedRunningTime="2026-03-14 06:12:40.425585635 +0000 UTC m=+2414.463846381"
Mar 14 06:12:44 crc kubenswrapper[4817]: I0314 06:12:44.784874 4817 scope.go:117] "RemoveContainer" containerID="90eb4f023096139ea1173e33b952181702b6099b4941f8e637a2a526ce772315"
Mar 14 06:12:52 crc kubenswrapper[4817]: I0314 06:12:52.732543 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:12:52 crc kubenswrapper[4817]: E0314 06:12:52.733704 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:13:03 crc kubenswrapper[4817]: I0314 06:13:03.732755 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:13:03 crc kubenswrapper[4817]: E0314 06:13:03.733749 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:13:17 crc kubenswrapper[4817]: I0314 06:13:17.732727 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:13:17 crc kubenswrapper[4817]: E0314 06:13:17.733674 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:13:20 crc kubenswrapper[4817]: I0314 06:13:20.800434 4817 generic.go:334] "Generic (PLEG): container finished" podID="c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a" containerID="c1211e81c0f3075c8208e835107f5e721899f027fc7fb4d883cbd54bd31f079c" exitCode=0
Mar 14 06:13:20 crc kubenswrapper[4817]: I0314 06:13:20.800532 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw" event={"ID":"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a","Type":"ContainerDied","Data":"c1211e81c0f3075c8208e835107f5e721899f027fc7fb4d883cbd54bd31f079c"}
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.267782 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.432160 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjtvt\" (UniqueName: \"kubernetes.io/projected/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-kube-api-access-rjtvt\") pod \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") "
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.432512 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ssh-key-openstack-edpm-ipam\") pod \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") "
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.432577 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ceph\") pod \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") "
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.432737 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-inventory\") pod \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\" (UID: \"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a\") "
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.439217 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ceph" (OuterVolumeSpecName: "ceph") pod "c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a" (UID: "c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.444384 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-kube-api-access-rjtvt" (OuterVolumeSpecName: "kube-api-access-rjtvt") pod "c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a" (UID: "c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a"). InnerVolumeSpecName "kube-api-access-rjtvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.461025 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a" (UID: "c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.468675 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-inventory" (OuterVolumeSpecName: "inventory") pod "c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a" (UID: "c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.535282 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjtvt\" (UniqueName: \"kubernetes.io/projected/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-kube-api-access-rjtvt\") on node \"crc\" DevicePath \"\""
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.535310 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.535319 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.535328 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.834052 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw" event={"ID":"c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a","Type":"ContainerDied","Data":"02a25e5137624e1b1bf2375b2861df8ac7f96effa12aabf2a65b37b409ec237d"}
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.834104 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a25e5137624e1b1bf2375b2861df8ac7f96effa12aabf2a65b37b409ec237d"
Mar 14 06:13:22 crc kubenswrapper[4817]: I0314 06:13:22.834178 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.040195 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g7866"]
Mar 14 06:13:23 crc kubenswrapper[4817]: E0314 06:13:23.040632 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.040646 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.040815 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.041641 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.049724 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.050249 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.050351 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.050374 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.053262 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.064068 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g7866"]
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.148074 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgnx\" (UniqueName: \"kubernetes.io/projected/9f8afb0b-9422-463a-86d7-9c59bcfac32f-kube-api-access-rqgnx\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.148141 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.148265 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ceph\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.148288 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.250615 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ceph\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.250976 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.251094 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgnx\" (UniqueName: \"kubernetes.io/projected/9f8afb0b-9422-463a-86d7-9c59bcfac32f-kube-api-access-rqgnx\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.251186 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.256452 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ceph\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.256847 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.259694 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.270693 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgnx\" (UniqueName: \"kubernetes.io/projected/9f8afb0b-9422-463a-86d7-9c59bcfac32f-kube-api-access-rqgnx\") pod \"ssh-known-hosts-edpm-deployment-g7866\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") " pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.365068 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.928590 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g7866"]
Mar 14 06:13:23 crc kubenswrapper[4817]: I0314 06:13:23.934019 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 06:13:24 crc kubenswrapper[4817]: I0314 06:13:24.859852 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g7866" event={"ID":"9f8afb0b-9422-463a-86d7-9c59bcfac32f","Type":"ContainerStarted","Data":"9bb75ac67bb4664d7cf075508acd3400c70d7012841140f885e29ad4eb1948de"}
Mar 14 06:13:24 crc kubenswrapper[4817]: I0314 06:13:24.860255 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g7866" event={"ID":"9f8afb0b-9422-463a-86d7-9c59bcfac32f","Type":"ContainerStarted","Data":"d2e1d0d97dd5ea569bfeb48d7908648dc460757ae21faf19627e16ee354fac86"}
Mar 14 06:13:24 crc kubenswrapper[4817]: I0314 06:13:24.884798 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-g7866" podStartSLOduration=1.448103637 podStartE2EDuration="1.884776713s" podCreationTimestamp="2026-03-14 06:13:23 +0000 UTC" firstStartedPulling="2026-03-14 06:13:23.93374781 +0000 UTC m=+2457.972008566" lastFinishedPulling="2026-03-14 06:13:24.370420896 +0000 UTC m=+2458.408681642" observedRunningTime="2026-03-14 06:13:24.879586855 +0000 UTC m=+2458.917847601" watchObservedRunningTime="2026-03-14 06:13:24.884776713 +0000 UTC m=+2458.923037449"
Mar 14 06:13:30 crc kubenswrapper[4817]: I0314 06:13:30.732314 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:13:30 crc kubenswrapper[4817]: E0314 06:13:30.733178 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:13:33 crc kubenswrapper[4817]: I0314 06:13:33.955926 4817 generic.go:334] "Generic (PLEG): container finished" podID="9f8afb0b-9422-463a-86d7-9c59bcfac32f" containerID="9bb75ac67bb4664d7cf075508acd3400c70d7012841140f885e29ad4eb1948de" exitCode=0
Mar 14 06:13:33 crc kubenswrapper[4817]: I0314 06:13:33.956016 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g7866" event={"ID":"9f8afb0b-9422-463a-86d7-9c59bcfac32f","Type":"ContainerDied","Data":"9bb75ac67bb4664d7cf075508acd3400c70d7012841140f885e29ad4eb1948de"}
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.464835 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.616562 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ceph\") pod \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") "
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.616708 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqgnx\" (UniqueName: \"kubernetes.io/projected/9f8afb0b-9422-463a-86d7-9c59bcfac32f-kube-api-access-rqgnx\") pod \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") "
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.616783 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ssh-key-openstack-edpm-ipam\") pod \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") "
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.616908 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-inventory-0\") pod \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\" (UID: \"9f8afb0b-9422-463a-86d7-9c59bcfac32f\") "
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.622640 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8afb0b-9422-463a-86d7-9c59bcfac32f-kube-api-access-rqgnx" (OuterVolumeSpecName: "kube-api-access-rqgnx") pod "9f8afb0b-9422-463a-86d7-9c59bcfac32f" (UID: "9f8afb0b-9422-463a-86d7-9c59bcfac32f"). InnerVolumeSpecName "kube-api-access-rqgnx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.626150 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ceph" (OuterVolumeSpecName: "ceph") pod "9f8afb0b-9422-463a-86d7-9c59bcfac32f" (UID: "9f8afb0b-9422-463a-86d7-9c59bcfac32f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.645474 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f8afb0b-9422-463a-86d7-9c59bcfac32f" (UID: "9f8afb0b-9422-463a-86d7-9c59bcfac32f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.646366 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9f8afb0b-9422-463a-86d7-9c59bcfac32f" (UID: "9f8afb0b-9422-463a-86d7-9c59bcfac32f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.718704 4817 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-inventory-0\") on node \"crc\" DevicePath \"\""
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.718746 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.718755 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqgnx\" (UniqueName: \"kubernetes.io/projected/9f8afb0b-9422-463a-86d7-9c59bcfac32f-kube-api-access-rqgnx\") on node \"crc\" DevicePath \"\""
Mar 14 06:13:35 crc kubenswrapper[4817]: I0314 06:13:35.718768 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f8afb0b-9422-463a-86d7-9c59bcfac32f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.230873 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"]
Mar 14 06:13:36 crc kubenswrapper[4817]: E0314 06:13:36.231510 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8afb0b-9422-463a-86d7-9c59bcfac32f" containerName="ssh-known-hosts-edpm-deployment"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.231535 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8afb0b-9422-463a-86d7-9c59bcfac32f" containerName="ssh-known-hosts-edpm-deployment"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.231796 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8afb0b-9422-463a-86d7-9c59bcfac32f" containerName="ssh-known-hosts-edpm-deployment"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.232688 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.233757 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g7866" event={"ID":"9f8afb0b-9422-463a-86d7-9c59bcfac32f","Type":"ContainerDied","Data":"d2e1d0d97dd5ea569bfeb48d7908648dc460757ae21faf19627e16ee354fac86"}
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.233986 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2e1d0d97dd5ea569bfeb48d7908648dc460757ae21faf19627e16ee354fac86"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.233957 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g7866"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.241393 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"]
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.402186 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.402581 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25r8g\" (UniqueName: \"kubernetes.io/projected/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-kube-api-access-25r8g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.402648 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.402677 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.504655 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25r8g\" (UniqueName: \"kubernetes.io/projected/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-kube-api-access-25r8g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.504769 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"
Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.504808 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName:
\"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.504861 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.509639 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.510212 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.510408 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.524971 4817 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-25r8g\" (UniqueName: \"kubernetes.io/projected/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-kube-api-access-25r8g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h5rd5\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:36 crc kubenswrapper[4817]: I0314 06:13:36.564283 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:37 crc kubenswrapper[4817]: I0314 06:13:37.144642 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5"] Mar 14 06:13:37 crc kubenswrapper[4817]: W0314 06:13:37.151801 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ccb7da0_02de_4f65_9b76_6c8c0a47a34e.slice/crio-6d1f0b93f6245e139e703e4669f7e3ef2d1f0ceb8af4bae90a944696709ec9fe WatchSource:0}: Error finding container 6d1f0b93f6245e139e703e4669f7e3ef2d1f0ceb8af4bae90a944696709ec9fe: Status 404 returned error can't find the container with id 6d1f0b93f6245e139e703e4669f7e3ef2d1f0ceb8af4bae90a944696709ec9fe Mar 14 06:13:37 crc kubenswrapper[4817]: I0314 06:13:37.243627 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" event={"ID":"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e","Type":"ContainerStarted","Data":"6d1f0b93f6245e139e703e4669f7e3ef2d1f0ceb8af4bae90a944696709ec9fe"} Mar 14 06:13:38 crc kubenswrapper[4817]: I0314 06:13:38.252960 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" event={"ID":"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e","Type":"ContainerStarted","Data":"8e590c8a34cf3874d95672d61d00aea9dc6b4575f18ecc434a5eadaadbaf0833"} Mar 14 06:13:38 crc kubenswrapper[4817]: I0314 
06:13:38.279018 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" podStartSLOduration=1.841810337 podStartE2EDuration="2.278998738s" podCreationTimestamp="2026-03-14 06:13:36 +0000 UTC" firstStartedPulling="2026-03-14 06:13:37.154688365 +0000 UTC m=+2471.192949101" lastFinishedPulling="2026-03-14 06:13:37.591876756 +0000 UTC m=+2471.630137502" observedRunningTime="2026-03-14 06:13:38.276123367 +0000 UTC m=+2472.314384113" watchObservedRunningTime="2026-03-14 06:13:38.278998738 +0000 UTC m=+2472.317259484" Mar 14 06:13:44 crc kubenswrapper[4817]: I0314 06:13:44.732627 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:13:44 crc kubenswrapper[4817]: E0314 06:13:44.733562 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:13:45 crc kubenswrapper[4817]: I0314 06:13:45.314454 4817 generic.go:334] "Generic (PLEG): container finished" podID="9ccb7da0-02de-4f65-9b76-6c8c0a47a34e" containerID="8e590c8a34cf3874d95672d61d00aea9dc6b4575f18ecc434a5eadaadbaf0833" exitCode=0 Mar 14 06:13:45 crc kubenswrapper[4817]: I0314 06:13:45.314504 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" event={"ID":"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e","Type":"ContainerDied","Data":"8e590c8a34cf3874d95672d61d00aea9dc6b4575f18ecc434a5eadaadbaf0833"} Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.790817 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.917950 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ssh-key-openstack-edpm-ipam\") pod \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.918209 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-inventory\") pod \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.918255 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ceph\") pod \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.918423 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25r8g\" (UniqueName: \"kubernetes.io/projected/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-kube-api-access-25r8g\") pod \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\" (UID: \"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e\") " Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.927317 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ceph" (OuterVolumeSpecName: "ceph") pod "9ccb7da0-02de-4f65-9b76-6c8c0a47a34e" (UID: "9ccb7da0-02de-4f65-9b76-6c8c0a47a34e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.927372 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-kube-api-access-25r8g" (OuterVolumeSpecName: "kube-api-access-25r8g") pod "9ccb7da0-02de-4f65-9b76-6c8c0a47a34e" (UID: "9ccb7da0-02de-4f65-9b76-6c8c0a47a34e"). InnerVolumeSpecName "kube-api-access-25r8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.946120 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-inventory" (OuterVolumeSpecName: "inventory") pod "9ccb7da0-02de-4f65-9b76-6c8c0a47a34e" (UID: "9ccb7da0-02de-4f65-9b76-6c8c0a47a34e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:13:46 crc kubenswrapper[4817]: I0314 06:13:46.955185 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ccb7da0-02de-4f65-9b76-6c8c0a47a34e" (UID: "9ccb7da0-02de-4f65-9b76-6c8c0a47a34e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.021275 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25r8g\" (UniqueName: \"kubernetes.io/projected/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-kube-api-access-25r8g\") on node \"crc\" DevicePath \"\"" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.021328 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.021344 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.021359 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ccb7da0-02de-4f65-9b76-6c8c0a47a34e-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.335585 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" event={"ID":"9ccb7da0-02de-4f65-9b76-6c8c0a47a34e","Type":"ContainerDied","Data":"6d1f0b93f6245e139e703e4669f7e3ef2d1f0ceb8af4bae90a944696709ec9fe"} Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.335625 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d1f0b93f6245e139e703e4669f7e3ef2d1f0ceb8af4bae90a944696709ec9fe" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.335685 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h5rd5" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.421299 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt"] Mar 14 06:13:47 crc kubenswrapper[4817]: E0314 06:13:47.421717 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccb7da0-02de-4f65-9b76-6c8c0a47a34e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.421738 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccb7da0-02de-4f65-9b76-6c8c0a47a34e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.421942 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccb7da0-02de-4f65-9b76-6c8c0a47a34e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.422501 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.424812 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.425036 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.425050 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.425307 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.425441 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.451067 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt"] Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.532327 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.532445 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsm7\" (UniqueName: \"kubernetes.io/projected/350b95d7-fdff-421f-bb13-2b9b307e0918-kube-api-access-wdsm7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.532522 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.532555 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.634540 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.634626 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.634675 4817 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.634784 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdsm7\" (UniqueName: \"kubernetes.io/projected/350b95d7-fdff-421f-bb13-2b9b307e0918-kube-api-access-wdsm7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.638488 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.639293 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.639445 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 
14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.674633 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdsm7\" (UniqueName: \"kubernetes.io/projected/350b95d7-fdff-421f-bb13-2b9b307e0918-kube-api-access-wdsm7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:47 crc kubenswrapper[4817]: I0314 06:13:47.742252 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:13:48 crc kubenswrapper[4817]: I0314 06:13:48.324816 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt"] Mar 14 06:13:48 crc kubenswrapper[4817]: W0314 06:13:48.329942 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod350b95d7_fdff_421f_bb13_2b9b307e0918.slice/crio-ee58a108f252ce5fad1fc45af6c40034e623ec9ed230b6d80d09b5a2e46013ca WatchSource:0}: Error finding container ee58a108f252ce5fad1fc45af6c40034e623ec9ed230b6d80d09b5a2e46013ca: Status 404 returned error can't find the container with id ee58a108f252ce5fad1fc45af6c40034e623ec9ed230b6d80d09b5a2e46013ca Mar 14 06:13:48 crc kubenswrapper[4817]: I0314 06:13:48.344509 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" event={"ID":"350b95d7-fdff-421f-bb13-2b9b307e0918","Type":"ContainerStarted","Data":"ee58a108f252ce5fad1fc45af6c40034e623ec9ed230b6d80d09b5a2e46013ca"} Mar 14 06:13:49 crc kubenswrapper[4817]: I0314 06:13:49.355868 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" 
event={"ID":"350b95d7-fdff-421f-bb13-2b9b307e0918","Type":"ContainerStarted","Data":"1d2f5ac01ea02b6df4ea65bedd0ba2375f8a41a370c7598f9e465b8bfd80de12"} Mar 14 06:13:49 crc kubenswrapper[4817]: I0314 06:13:49.381507 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" podStartSLOduration=1.701804259 podStartE2EDuration="2.38148665s" podCreationTimestamp="2026-03-14 06:13:47 +0000 UTC" firstStartedPulling="2026-03-14 06:13:48.332906262 +0000 UTC m=+2482.371166998" lastFinishedPulling="2026-03-14 06:13:49.012588643 +0000 UTC m=+2483.050849389" observedRunningTime="2026-03-14 06:13:49.371934969 +0000 UTC m=+2483.410195715" watchObservedRunningTime="2026-03-14 06:13:49.38148665 +0000 UTC m=+2483.419747396" Mar 14 06:13:58 crc kubenswrapper[4817]: I0314 06:13:58.733495 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:13:58 crc kubenswrapper[4817]: E0314 06:13:58.734995 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:13:59 crc kubenswrapper[4817]: I0314 06:13:59.452765 4817 generic.go:334] "Generic (PLEG): container finished" podID="350b95d7-fdff-421f-bb13-2b9b307e0918" containerID="1d2f5ac01ea02b6df4ea65bedd0ba2375f8a41a370c7598f9e465b8bfd80de12" exitCode=0 Mar 14 06:13:59 crc kubenswrapper[4817]: I0314 06:13:59.452837 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" 
event={"ID":"350b95d7-fdff-421f-bb13-2b9b307e0918","Type":"ContainerDied","Data":"1d2f5ac01ea02b6df4ea65bedd0ba2375f8a41a370c7598f9e465b8bfd80de12"} Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.145835 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557814-npvc7"] Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.149028 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557814-npvc7" Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.153924 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.154218 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.155010 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.163675 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557814-npvc7"] Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.238870 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8srsg\" (UniqueName: \"kubernetes.io/projected/f260e502-12c6-4563-980d-06ca27c3f78c-kube-api-access-8srsg\") pod \"auto-csr-approver-29557814-npvc7\" (UID: \"f260e502-12c6-4563-980d-06ca27c3f78c\") " pod="openshift-infra/auto-csr-approver-29557814-npvc7" Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.340413 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8srsg\" (UniqueName: \"kubernetes.io/projected/f260e502-12c6-4563-980d-06ca27c3f78c-kube-api-access-8srsg\") pod \"auto-csr-approver-29557814-npvc7\" (UID: 
\"f260e502-12c6-4563-980d-06ca27c3f78c\") " pod="openshift-infra/auto-csr-approver-29557814-npvc7" Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.357803 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8srsg\" (UniqueName: \"kubernetes.io/projected/f260e502-12c6-4563-980d-06ca27c3f78c-kube-api-access-8srsg\") pod \"auto-csr-approver-29557814-npvc7\" (UID: \"f260e502-12c6-4563-980d-06ca27c3f78c\") " pod="openshift-infra/auto-csr-approver-29557814-npvc7" Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.487588 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557814-npvc7" Mar 14 06:14:00 crc kubenswrapper[4817]: I0314 06:14:00.991035 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.075275 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557814-npvc7"] Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.157801 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-inventory\") pod \"350b95d7-fdff-421f-bb13-2b9b307e0918\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.158164 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdsm7\" (UniqueName: \"kubernetes.io/projected/350b95d7-fdff-421f-bb13-2b9b307e0918-kube-api-access-wdsm7\") pod \"350b95d7-fdff-421f-bb13-2b9b307e0918\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.158349 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ssh-key-openstack-edpm-ipam\") pod \"350b95d7-fdff-421f-bb13-2b9b307e0918\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.158404 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ceph\") pod \"350b95d7-fdff-421f-bb13-2b9b307e0918\" (UID: \"350b95d7-fdff-421f-bb13-2b9b307e0918\") " Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.164877 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350b95d7-fdff-421f-bb13-2b9b307e0918-kube-api-access-wdsm7" (OuterVolumeSpecName: "kube-api-access-wdsm7") pod "350b95d7-fdff-421f-bb13-2b9b307e0918" (UID: "350b95d7-fdff-421f-bb13-2b9b307e0918"). InnerVolumeSpecName "kube-api-access-wdsm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.165235 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ceph" (OuterVolumeSpecName: "ceph") pod "350b95d7-fdff-421f-bb13-2b9b307e0918" (UID: "350b95d7-fdff-421f-bb13-2b9b307e0918"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.188161 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-inventory" (OuterVolumeSpecName: "inventory") pod "350b95d7-fdff-421f-bb13-2b9b307e0918" (UID: "350b95d7-fdff-421f-bb13-2b9b307e0918"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.191011 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "350b95d7-fdff-421f-bb13-2b9b307e0918" (UID: "350b95d7-fdff-421f-bb13-2b9b307e0918"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.260367 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.260405 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.260420 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350b95d7-fdff-421f-bb13-2b9b307e0918-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.260433 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdsm7\" (UniqueName: \"kubernetes.io/projected/350b95d7-fdff-421f-bb13-2b9b307e0918-kube-api-access-wdsm7\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.475627 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" event={"ID":"350b95d7-fdff-421f-bb13-2b9b307e0918","Type":"ContainerDied","Data":"ee58a108f252ce5fad1fc45af6c40034e623ec9ed230b6d80d09b5a2e46013ca"} Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.475696 
4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee58a108f252ce5fad1fc45af6c40034e623ec9ed230b6d80d09b5a2e46013ca" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.475691 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.477846 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557814-npvc7" event={"ID":"f260e502-12c6-4563-980d-06ca27c3f78c","Type":"ContainerStarted","Data":"ad548dae64e94284061bbf3184a9d0d512d19793ca0642c541ab6d37929840c9"} Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.578826 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6"] Mar 14 06:14:01 crc kubenswrapper[4817]: E0314 06:14:01.579259 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350b95d7-fdff-421f-bb13-2b9b307e0918" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.579287 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="350b95d7-fdff-421f-bb13-2b9b307e0918" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.579470 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="350b95d7-fdff-421f-bb13-2b9b307e0918" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.580114 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.586844 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.586951 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.587042 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.587116 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.587149 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.587190 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.587261 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.587294 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.601449 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6"] Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.675844 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2p5p\" (UniqueName: 
\"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-kube-api-access-l2p5p\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.675942 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.676033 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.676321 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.676720 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.677050 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.677382 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.677730 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.677958 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ceph\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.678019 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.678124 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.678174 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.678340 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780078 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780152 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780180 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780235 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780280 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780330 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780468 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2p5p\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-kube-api-access-l2p5p\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780689 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: 
I0314 06:14:01.780751 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780793 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780834 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780871 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.780926 4817 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.785853 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.787082 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.787331 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.787507 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.789962 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.791076 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.799420 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.800623 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" 
Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.801133 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.802397 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.802879 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.809115 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.817021 4817 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l2p5p\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-kube-api-access-l2p5p\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:01 crc kubenswrapper[4817]: I0314 06:14:01.903972 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:02 crc kubenswrapper[4817]: I0314 06:14:02.490020 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557814-npvc7" event={"ID":"f260e502-12c6-4563-980d-06ca27c3f78c","Type":"ContainerStarted","Data":"7fe2d5926ca53da989aaefeada3baa11deaefb2ad23cfca9a41c23a27240f614"} Mar 14 06:14:02 crc kubenswrapper[4817]: I0314 06:14:02.524950 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557814-npvc7" podStartSLOduration=1.546832475 podStartE2EDuration="2.524929669s" podCreationTimestamp="2026-03-14 06:14:00 +0000 UTC" firstStartedPulling="2026-03-14 06:14:01.082948086 +0000 UTC m=+2495.121208842" lastFinishedPulling="2026-03-14 06:14:02.0610453 +0000 UTC m=+2496.099306036" observedRunningTime="2026-03-14 06:14:02.515279144 +0000 UTC m=+2496.553539900" watchObservedRunningTime="2026-03-14 06:14:02.524929669 +0000 UTC m=+2496.563190415" Mar 14 06:14:02 crc kubenswrapper[4817]: I0314 06:14:02.625083 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6"] Mar 14 06:14:03 crc kubenswrapper[4817]: I0314 06:14:03.502396 4817 generic.go:334] "Generic (PLEG): container finished" podID="f260e502-12c6-4563-980d-06ca27c3f78c" containerID="7fe2d5926ca53da989aaefeada3baa11deaefb2ad23cfca9a41c23a27240f614" exitCode=0 Mar 14 06:14:03 crc kubenswrapper[4817]: I0314 
06:14:03.502507 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557814-npvc7" event={"ID":"f260e502-12c6-4563-980d-06ca27c3f78c","Type":"ContainerDied","Data":"7fe2d5926ca53da989aaefeada3baa11deaefb2ad23cfca9a41c23a27240f614"} Mar 14 06:14:03 crc kubenswrapper[4817]: I0314 06:14:03.504716 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" event={"ID":"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f","Type":"ContainerStarted","Data":"bbace0404203e411b7d8b6dea5e44b6d20f9099dfc7ce1f423cba341409cd411"} Mar 14 06:14:03 crc kubenswrapper[4817]: I0314 06:14:03.504777 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" event={"ID":"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f","Type":"ContainerStarted","Data":"fb4ef1b11f4f1d15f0538b6e155ac82aa3a94deb876ab6d3e31adfebea26e5f5"} Mar 14 06:14:03 crc kubenswrapper[4817]: I0314 06:14:03.550159 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" podStartSLOduration=2.110358157 podStartE2EDuration="2.550125942s" podCreationTimestamp="2026-03-14 06:14:01 +0000 UTC" firstStartedPulling="2026-03-14 06:14:02.647488966 +0000 UTC m=+2496.685749712" lastFinishedPulling="2026-03-14 06:14:03.087256741 +0000 UTC m=+2497.125517497" observedRunningTime="2026-03-14 06:14:03.53704733 +0000 UTC m=+2497.575308076" watchObservedRunningTime="2026-03-14 06:14:03.550125942 +0000 UTC m=+2497.588386678" Mar 14 06:14:04 crc kubenswrapper[4817]: I0314 06:14:04.863863 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557814-npvc7" Mar 14 06:14:04 crc kubenswrapper[4817]: I0314 06:14:04.972112 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8srsg\" (UniqueName: \"kubernetes.io/projected/f260e502-12c6-4563-980d-06ca27c3f78c-kube-api-access-8srsg\") pod \"f260e502-12c6-4563-980d-06ca27c3f78c\" (UID: \"f260e502-12c6-4563-980d-06ca27c3f78c\") " Mar 14 06:14:04 crc kubenswrapper[4817]: I0314 06:14:04.984334 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f260e502-12c6-4563-980d-06ca27c3f78c-kube-api-access-8srsg" (OuterVolumeSpecName: "kube-api-access-8srsg") pod "f260e502-12c6-4563-980d-06ca27c3f78c" (UID: "f260e502-12c6-4563-980d-06ca27c3f78c"). InnerVolumeSpecName "kube-api-access-8srsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:14:05 crc kubenswrapper[4817]: I0314 06:14:05.074724 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8srsg\" (UniqueName: \"kubernetes.io/projected/f260e502-12c6-4563-980d-06ca27c3f78c-kube-api-access-8srsg\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:05 crc kubenswrapper[4817]: I0314 06:14:05.531258 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557814-npvc7" event={"ID":"f260e502-12c6-4563-980d-06ca27c3f78c","Type":"ContainerDied","Data":"ad548dae64e94284061bbf3184a9d0d512d19793ca0642c541ab6d37929840c9"} Mar 14 06:14:05 crc kubenswrapper[4817]: I0314 06:14:05.531752 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad548dae64e94284061bbf3184a9d0d512d19793ca0642c541ab6d37929840c9" Mar 14 06:14:05 crc kubenswrapper[4817]: I0314 06:14:05.531347 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557814-npvc7" Mar 14 06:14:05 crc kubenswrapper[4817]: I0314 06:14:05.970858 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557808-l7twk"] Mar 14 06:14:05 crc kubenswrapper[4817]: I0314 06:14:05.980506 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557808-l7twk"] Mar 14 06:14:06 crc kubenswrapper[4817]: I0314 06:14:06.757163 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83371b58-2667-4d13-a275-cf93548d7d0f" path="/var/lib/kubelet/pods/83371b58-2667-4d13-a275-cf93548d7d0f/volumes" Mar 14 06:14:10 crc kubenswrapper[4817]: I0314 06:14:10.732991 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:14:10 crc kubenswrapper[4817]: E0314 06:14:10.733981 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:14:24 crc kubenswrapper[4817]: I0314 06:14:24.732861 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:14:24 crc kubenswrapper[4817]: E0314 06:14:24.735935 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:14:32 crc kubenswrapper[4817]: I0314 06:14:32.835069 4817 generic.go:334] "Generic (PLEG): container finished" podID="273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" containerID="bbace0404203e411b7d8b6dea5e44b6d20f9099dfc7ce1f423cba341409cd411" exitCode=0 Mar 14 06:14:32 crc kubenswrapper[4817]: I0314 06:14:32.835176 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" event={"ID":"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f","Type":"ContainerDied","Data":"bbace0404203e411b7d8b6dea5e44b6d20f9099dfc7ce1f423cba341409cd411"} Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.256237 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.328655 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ceph\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.328732 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-bootstrap-combined-ca-bundle\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.328823 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc 
kubenswrapper[4817]: I0314 06:14:34.328911 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2p5p\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-kube-api-access-l2p5p\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.328945 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.329002 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-neutron-metadata-combined-ca-bundle\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.329034 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-repo-setup-combined-ca-bundle\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.329075 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-inventory\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.329097 4817 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ssh-key-openstack-edpm-ipam\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.329125 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-nova-combined-ca-bundle\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.329191 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ovn-combined-ca-bundle\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.329224 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-libvirt-combined-ca-bundle\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.329278 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\" (UID: \"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f\") " Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.338920 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.339697 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.339970 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.338221 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.340553 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ceph" (OuterVolumeSpecName: "ceph") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.343564 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.345794 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.348142 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.353105 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.358143 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-kube-api-access-l2p5p" (OuterVolumeSpecName: "kube-api-access-l2p5p") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "kube-api-access-l2p5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.366053 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.379763 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.389549 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-inventory" (OuterVolumeSpecName: "inventory") pod "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" (UID: "273c6104-8daf-4e5e-b87e-aaf48ee8ae1f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434113 4817 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434165 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2p5p\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-kube-api-access-l2p5p\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434184 4817 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434204 4817 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434223 4817 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-repo-setup-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434248 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434266 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434284 4817 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434303 4817 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434320 4817 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434337 4817 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434355 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-ceph\") on node \"crc\" DevicePath \"\"" Mar 
14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.434373 4817 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273c6104-8daf-4e5e-b87e-aaf48ee8ae1f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.858481 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" event={"ID":"273c6104-8daf-4e5e-b87e-aaf48ee8ae1f","Type":"ContainerDied","Data":"fb4ef1b11f4f1d15f0538b6e155ac82aa3a94deb876ab6d3e31adfebea26e5f5"} Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.858556 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb4ef1b11f4f1d15f0538b6e155ac82aa3a94deb876ab6d3e31adfebea26e5f5" Mar 14 06:14:34 crc kubenswrapper[4817]: I0314 06:14:34.858731 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.025973 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng"] Mar 14 06:14:35 crc kubenswrapper[4817]: E0314 06:14:35.026475 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f260e502-12c6-4563-980d-06ca27c3f78c" containerName="oc" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.026495 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f260e502-12c6-4563-980d-06ca27c3f78c" containerName="oc" Mar 14 06:14:35 crc kubenswrapper[4817]: E0314 06:14:35.026520 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.026532 4817 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.026730 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="273c6104-8daf-4e5e-b87e-aaf48ee8ae1f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.026753 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f260e502-12c6-4563-980d-06ca27c3f78c" containerName="oc" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.027493 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.030928 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.032699 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.033054 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.033222 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.033479 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.052532 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng"] Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.152150 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.152246 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.152645 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97vt6\" (UniqueName: \"kubernetes.io/projected/b24eeb13-77b4-4662-90f0-933ae091cfe2-kube-api-access-97vt6\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.153004 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.255393 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ssh-key-openstack-edpm-ipam\") pod 
\"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.255502 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.255598 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.255814 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97vt6\" (UniqueName: \"kubernetes.io/projected/b24eeb13-77b4-4662-90f0-933ae091cfe2-kube-api-access-97vt6\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.268924 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.268949 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.269194 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.285017 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97vt6\" (UniqueName: \"kubernetes.io/projected/b24eeb13-77b4-4662-90f0-933ae091cfe2-kube-api-access-97vt6\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-knzng\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.352921 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:35 crc kubenswrapper[4817]: I0314 06:14:35.909516 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng"] Mar 14 06:14:36 crc kubenswrapper[4817]: I0314 06:14:36.738389 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:14:36 crc kubenswrapper[4817]: E0314 06:14:36.738921 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:14:36 crc kubenswrapper[4817]: I0314 06:14:36.979165 4817 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gtb86 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.74:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 06:14:36 crc kubenswrapper[4817]: I0314 06:14:36.979687 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" podUID="22e59375-f50e-4050-aeeb-a305ffcb3572" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.74:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 06:14:36 crc kubenswrapper[4817]: I0314 06:14:36.979163 4817 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gtb86 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.74:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 06:14:36 crc kubenswrapper[4817]: I0314 06:14:36.979842 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gtb86" podUID="22e59375-f50e-4050-aeeb-a305ffcb3572" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.74:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 06:14:38 crc kubenswrapper[4817]: I0314 06:14:38.094544 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" event={"ID":"b24eeb13-77b4-4662-90f0-933ae091cfe2","Type":"ContainerStarted","Data":"f5ad6d931b4f341f183e262f91d685f2c77a685d8058fe50225a7098d9e87f72"} Mar 14 06:14:38 crc kubenswrapper[4817]: I0314 06:14:38.094592 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" event={"ID":"b24eeb13-77b4-4662-90f0-933ae091cfe2","Type":"ContainerStarted","Data":"7b30cc4e957c8cf4e2b70ff16ecbde5ad31d83d97cef4bfa764bfab4aea10590"} Mar 14 06:14:38 crc kubenswrapper[4817]: I0314 06:14:38.122637 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" podStartSLOduration=3.724059751 podStartE2EDuration="4.122618482s" podCreationTimestamp="2026-03-14 06:14:34 +0000 UTC" firstStartedPulling="2026-03-14 06:14:37.078515602 +0000 UTC m=+2531.116776358" lastFinishedPulling="2026-03-14 06:14:37.477074343 +0000 UTC m=+2531.515335089" observedRunningTime="2026-03-14 06:14:38.118930627 +0000 UTC m=+2532.157191393" watchObservedRunningTime="2026-03-14 06:14:38.122618482 +0000 UTC m=+2532.160879228" Mar 14 06:14:43 crc kubenswrapper[4817]: I0314 06:14:43.143304 4817 generic.go:334] "Generic (PLEG): container finished" 
podID="b24eeb13-77b4-4662-90f0-933ae091cfe2" containerID="f5ad6d931b4f341f183e262f91d685f2c77a685d8058fe50225a7098d9e87f72" exitCode=0 Mar 14 06:14:43 crc kubenswrapper[4817]: I0314 06:14:43.143455 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" event={"ID":"b24eeb13-77b4-4662-90f0-933ae091cfe2","Type":"ContainerDied","Data":"f5ad6d931b4f341f183e262f91d685f2c77a685d8058fe50225a7098d9e87f72"} Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.668775 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.788624 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ceph\") pod \"b24eeb13-77b4-4662-90f0-933ae091cfe2\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.788738 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97vt6\" (UniqueName: \"kubernetes.io/projected/b24eeb13-77b4-4662-90f0-933ae091cfe2-kube-api-access-97vt6\") pod \"b24eeb13-77b4-4662-90f0-933ae091cfe2\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.789001 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ssh-key-openstack-edpm-ipam\") pod \"b24eeb13-77b4-4662-90f0-933ae091cfe2\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.789131 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-inventory\") pod \"b24eeb13-77b4-4662-90f0-933ae091cfe2\" (UID: \"b24eeb13-77b4-4662-90f0-933ae091cfe2\") " Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.799238 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ceph" (OuterVolumeSpecName: "ceph") pod "b24eeb13-77b4-4662-90f0-933ae091cfe2" (UID: "b24eeb13-77b4-4662-90f0-933ae091cfe2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.802228 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24eeb13-77b4-4662-90f0-933ae091cfe2-kube-api-access-97vt6" (OuterVolumeSpecName: "kube-api-access-97vt6") pod "b24eeb13-77b4-4662-90f0-933ae091cfe2" (UID: "b24eeb13-77b4-4662-90f0-933ae091cfe2"). InnerVolumeSpecName "kube-api-access-97vt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.816473 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b24eeb13-77b4-4662-90f0-933ae091cfe2" (UID: "b24eeb13-77b4-4662-90f0-933ae091cfe2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.828122 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-inventory" (OuterVolumeSpecName: "inventory") pod "b24eeb13-77b4-4662-90f0-933ae091cfe2" (UID: "b24eeb13-77b4-4662-90f0-933ae091cfe2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.893530 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.894101 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.894283 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b24eeb13-77b4-4662-90f0-933ae091cfe2-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.894448 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97vt6\" (UniqueName: \"kubernetes.io/projected/b24eeb13-77b4-4662-90f0-933ae091cfe2-kube-api-access-97vt6\") on node \"crc\" DevicePath \"\"" Mar 14 06:14:44 crc kubenswrapper[4817]: I0314 06:14:44.911417 4817 scope.go:117] "RemoveContainer" containerID="01bcc4033880bcc7c0164eeb69ab72dee709fb945677da0c5ecafadeb44e8bca" Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.166862 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng" event={"ID":"b24eeb13-77b4-4662-90f0-933ae091cfe2","Type":"ContainerDied","Data":"7b30cc4e957c8cf4e2b70ff16ecbde5ad31d83d97cef4bfa764bfab4aea10590"} Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.167241 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b30cc4e957c8cf4e2b70ff16ecbde5ad31d83d97cef4bfa764bfab4aea10590" Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.166980 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-knzng"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.290693 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"]
Mar 14 06:14:45 crc kubenswrapper[4817]: E0314 06:14:45.291699 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24eeb13-77b4-4662-90f0-933ae091cfe2" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.291725 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24eeb13-77b4-4662-90f0-933ae091cfe2" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.292030 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24eeb13-77b4-4662-90f0-933ae091cfe2" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.292956 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.294531 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.295942 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.296087 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.296350 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.297362 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.297960 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.313750 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"]
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.411126 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.411180 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd55a087-6a2d-4515-9774-96d247a25d52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.411205 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.411236 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b4ld\" (UniqueName: \"kubernetes.io/projected/dd55a087-6a2d-4515-9774-96d247a25d52-kube-api-access-6b4ld\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.411501 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.411576 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.514045 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.514126 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.514159 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd55a087-6a2d-4515-9774-96d247a25d52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.514179 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.514208 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b4ld\" (UniqueName: \"kubernetes.io/projected/dd55a087-6a2d-4515-9774-96d247a25d52-kube-api-access-6b4ld\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.514245 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.515973 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd55a087-6a2d-4515-9774-96d247a25d52-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.520072 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.520231 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.520749 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.522322 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.540053 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b4ld\" (UniqueName: \"kubernetes.io/projected/dd55a087-6a2d-4515-9774-96d247a25d52-kube-api-access-6b4ld\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-52cpk\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:45 crc kubenswrapper[4817]: I0314 06:14:45.615876 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:14:46 crc kubenswrapper[4817]: I0314 06:14:46.193216 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"]
Mar 14 06:14:47 crc kubenswrapper[4817]: I0314 06:14:47.187455 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk" event={"ID":"dd55a087-6a2d-4515-9774-96d247a25d52","Type":"ContainerStarted","Data":"8321d233d454694029bfa61e9f2b45708be2e69d9254bf50f77021abd37051f8"}
Mar 14 06:14:47 crc kubenswrapper[4817]: I0314 06:14:47.187995 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk" event={"ID":"dd55a087-6a2d-4515-9774-96d247a25d52","Type":"ContainerStarted","Data":"b76566bc9b2bc0681f83e62ecb53a0d8740d963da2cb3f19a4cbef024654e004"}
Mar 14 06:14:47 crc kubenswrapper[4817]: I0314 06:14:47.211591 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk" podStartSLOduration=1.690203041 podStartE2EDuration="2.211561797s" podCreationTimestamp="2026-03-14 06:14:45 +0000 UTC" firstStartedPulling="2026-03-14 06:14:46.203624235 +0000 UTC m=+2540.241884981" lastFinishedPulling="2026-03-14 06:14:46.724982981 +0000 UTC m=+2540.763243737" observedRunningTime="2026-03-14 06:14:47.205414722 +0000 UTC m=+2541.243675478" watchObservedRunningTime="2026-03-14 06:14:47.211561797 +0000 UTC m=+2541.249822543"
Mar 14 06:14:48 crc kubenswrapper[4817]: I0314 06:14:48.732742 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:14:48 crc kubenswrapper[4817]: E0314 06:14:48.733927 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.164656 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"]
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.168869 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.170877 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.173170 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.176723 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"]
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.291087 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a64feffd-b411-4202-be0f-05aa1181f11b-config-volume\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.291188 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx7jp\" (UniqueName: \"kubernetes.io/projected/a64feffd-b411-4202-be0f-05aa1181f11b-kube-api-access-dx7jp\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.291292 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a64feffd-b411-4202-be0f-05aa1181f11b-secret-volume\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.393809 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx7jp\" (UniqueName: \"kubernetes.io/projected/a64feffd-b411-4202-be0f-05aa1181f11b-kube-api-access-dx7jp\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.394005 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a64feffd-b411-4202-be0f-05aa1181f11b-secret-volume\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.394115 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a64feffd-b411-4202-be0f-05aa1181f11b-config-volume\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.396813 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a64feffd-b411-4202-be0f-05aa1181f11b-config-volume\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.402097 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a64feffd-b411-4202-be0f-05aa1181f11b-secret-volume\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.418280 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx7jp\" (UniqueName: \"kubernetes.io/projected/a64feffd-b411-4202-be0f-05aa1181f11b-kube-api-access-dx7jp\") pod \"collect-profiles-29557815-7t7h9\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.498593 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:00 crc kubenswrapper[4817]: I0314 06:15:00.986384 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"]
Mar 14 06:15:01 crc kubenswrapper[4817]: I0314 06:15:01.327084 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9" event={"ID":"a64feffd-b411-4202-be0f-05aa1181f11b","Type":"ContainerStarted","Data":"8d2949f5cc37c3957d9aab0b7ab2925f769a3f62881d294f784abb7157017909"}
Mar 14 06:15:01 crc kubenswrapper[4817]: I0314 06:15:01.327201 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9" event={"ID":"a64feffd-b411-4202-be0f-05aa1181f11b","Type":"ContainerStarted","Data":"0b021a1c8ee6333ae7249cded7cb5301fe65fb0db780311ac832e992cc956424"}
Mar 14 06:15:01 crc kubenswrapper[4817]: I0314 06:15:01.355697 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9" podStartSLOduration=1.3556678 podStartE2EDuration="1.3556678s" podCreationTimestamp="2026-03-14 06:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:15:01.348070584 +0000 UTC m=+2555.386331350" watchObservedRunningTime="2026-03-14 06:15:01.3556678 +0000 UTC m=+2555.393928546"
Mar 14 06:15:01 crc kubenswrapper[4817]: I0314 06:15:01.732364 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:15:01 crc kubenswrapper[4817]: E0314 06:15:01.733125 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:15:02 crc kubenswrapper[4817]: I0314 06:15:02.339382 4817 generic.go:334] "Generic (PLEG): container finished" podID="a64feffd-b411-4202-be0f-05aa1181f11b" containerID="8d2949f5cc37c3957d9aab0b7ab2925f769a3f62881d294f784abb7157017909" exitCode=0
Mar 14 06:15:02 crc kubenswrapper[4817]: I0314 06:15:02.339453 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9" event={"ID":"a64feffd-b411-4202-be0f-05aa1181f11b","Type":"ContainerDied","Data":"8d2949f5cc37c3957d9aab0b7ab2925f769a3f62881d294f784abb7157017909"}
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.760260 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.874548 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a64feffd-b411-4202-be0f-05aa1181f11b-config-volume\") pod \"a64feffd-b411-4202-be0f-05aa1181f11b\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") "
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.875345 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a64feffd-b411-4202-be0f-05aa1181f11b-secret-volume\") pod \"a64feffd-b411-4202-be0f-05aa1181f11b\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") "
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.875400 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx7jp\" (UniqueName: \"kubernetes.io/projected/a64feffd-b411-4202-be0f-05aa1181f11b-kube-api-access-dx7jp\") pod \"a64feffd-b411-4202-be0f-05aa1181f11b\" (UID: \"a64feffd-b411-4202-be0f-05aa1181f11b\") "
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.875948 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a64feffd-b411-4202-be0f-05aa1181f11b-config-volume" (OuterVolumeSpecName: "config-volume") pod "a64feffd-b411-4202-be0f-05aa1181f11b" (UID: "a64feffd-b411-4202-be0f-05aa1181f11b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.876958 4817 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a64feffd-b411-4202-be0f-05aa1181f11b-config-volume\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.882820 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64feffd-b411-4202-be0f-05aa1181f11b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a64feffd-b411-4202-be0f-05aa1181f11b" (UID: "a64feffd-b411-4202-be0f-05aa1181f11b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.884380 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64feffd-b411-4202-be0f-05aa1181f11b-kube-api-access-dx7jp" (OuterVolumeSpecName: "kube-api-access-dx7jp") pod "a64feffd-b411-4202-be0f-05aa1181f11b" (UID: "a64feffd-b411-4202-be0f-05aa1181f11b"). InnerVolumeSpecName "kube-api-access-dx7jp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.978600 4817 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a64feffd-b411-4202-be0f-05aa1181f11b-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:03 crc kubenswrapper[4817]: I0314 06:15:03.978635 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx7jp\" (UniqueName: \"kubernetes.io/projected/a64feffd-b411-4202-be0f-05aa1181f11b-kube-api-access-dx7jp\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:04 crc kubenswrapper[4817]: I0314 06:15:04.359564 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9" event={"ID":"a64feffd-b411-4202-be0f-05aa1181f11b","Type":"ContainerDied","Data":"0b021a1c8ee6333ae7249cded7cb5301fe65fb0db780311ac832e992cc956424"}
Mar 14 06:15:04 crc kubenswrapper[4817]: I0314 06:15:04.359627 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b021a1c8ee6333ae7249cded7cb5301fe65fb0db780311ac832e992cc956424"
Mar 14 06:15:04 crc kubenswrapper[4817]: I0314 06:15:04.359713 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"
Mar 14 06:15:04 crc kubenswrapper[4817]: I0314 06:15:04.444789 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz"]
Mar 14 06:15:04 crc kubenswrapper[4817]: I0314 06:15:04.454289 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557770-ng6bz"]
Mar 14 06:15:04 crc kubenswrapper[4817]: I0314 06:15:04.752396 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb37610-64a2-43d9-98b1-513a60b6de4d" path="/var/lib/kubelet/pods/cdb37610-64a2-43d9-98b1-513a60b6de4d/volumes"
Mar 14 06:15:14 crc kubenswrapper[4817]: I0314 06:15:14.732172 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:15:14 crc kubenswrapper[4817]: E0314 06:15:14.733338 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:15:27 crc kubenswrapper[4817]: I0314 06:15:27.732501 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:15:27 crc kubenswrapper[4817]: E0314 06:15:27.733735 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:15:40 crc kubenswrapper[4817]: I0314 06:15:40.732322 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:15:40 crc kubenswrapper[4817]: E0314 06:15:40.733222 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:15:45 crc kubenswrapper[4817]: I0314 06:15:44.999432 4817 scope.go:117] "RemoveContainer" containerID="5a8f548a937ceb9117b83d0d4cf8205c0bbaedacf7c23ae3253e1ef339153343"
Mar 14 06:15:53 crc kubenswrapper[4817]: I0314 06:15:53.732565 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:15:53 crc kubenswrapper[4817]: E0314 06:15:53.733513 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:15:53 crc kubenswrapper[4817]: I0314 06:15:53.839781 4817 generic.go:334] "Generic (PLEG): container finished" podID="dd55a087-6a2d-4515-9774-96d247a25d52" containerID="8321d233d454694029bfa61e9f2b45708be2e69d9254bf50f77021abd37051f8" exitCode=0
Mar 14 06:15:53 crc kubenswrapper[4817]: I0314 06:15:53.839882 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk" event={"ID":"dd55a087-6a2d-4515-9774-96d247a25d52","Type":"ContainerDied","Data":"8321d233d454694029bfa61e9f2b45708be2e69d9254bf50f77021abd37051f8"}
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.279130 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.282523 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-inventory\") pod \"dd55a087-6a2d-4515-9774-96d247a25d52\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") "
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.282564 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ssh-key-openstack-edpm-ipam\") pod \"dd55a087-6a2d-4515-9774-96d247a25d52\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") "
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.282590 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b4ld\" (UniqueName: \"kubernetes.io/projected/dd55a087-6a2d-4515-9774-96d247a25d52-kube-api-access-6b4ld\") pod \"dd55a087-6a2d-4515-9774-96d247a25d52\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") "
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.282625 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ceph\") pod \"dd55a087-6a2d-4515-9774-96d247a25d52\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") "
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.282714 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd55a087-6a2d-4515-9774-96d247a25d52-ovncontroller-config-0\") pod \"dd55a087-6a2d-4515-9774-96d247a25d52\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") "
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.282734 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ovn-combined-ca-bundle\") pod \"dd55a087-6a2d-4515-9774-96d247a25d52\" (UID: \"dd55a087-6a2d-4515-9774-96d247a25d52\") "
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.289837 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dd55a087-6a2d-4515-9774-96d247a25d52" (UID: "dd55a087-6a2d-4515-9774-96d247a25d52"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.289857 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd55a087-6a2d-4515-9774-96d247a25d52-kube-api-access-6b4ld" (OuterVolumeSpecName: "kube-api-access-6b4ld") pod "dd55a087-6a2d-4515-9774-96d247a25d52" (UID: "dd55a087-6a2d-4515-9774-96d247a25d52"). InnerVolumeSpecName "kube-api-access-6b4ld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.290271 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ceph" (OuterVolumeSpecName: "ceph") pod "dd55a087-6a2d-4515-9774-96d247a25d52" (UID: "dd55a087-6a2d-4515-9774-96d247a25d52"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.321683 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-inventory" (OuterVolumeSpecName: "inventory") pod "dd55a087-6a2d-4515-9774-96d247a25d52" (UID: "dd55a087-6a2d-4515-9774-96d247a25d52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.323620 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd55a087-6a2d-4515-9774-96d247a25d52-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "dd55a087-6a2d-4515-9774-96d247a25d52" (UID: "dd55a087-6a2d-4515-9774-96d247a25d52"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.329476 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd55a087-6a2d-4515-9774-96d247a25d52" (UID: "dd55a087-6a2d-4515-9774-96d247a25d52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.386309 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.386344 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.386360 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b4ld\" (UniqueName: \"kubernetes.io/projected/dd55a087-6a2d-4515-9774-96d247a25d52-kube-api-access-6b4ld\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.386371 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.386381 4817 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd55a087-6a2d-4515-9774-96d247a25d52-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.386391 4817 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd55a087-6a2d-4515-9774-96d247a25d52-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.861313 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk" event={"ID":"dd55a087-6a2d-4515-9774-96d247a25d52","Type":"ContainerDied","Data":"b76566bc9b2bc0681f83e62ecb53a0d8740d963da2cb3f19a4cbef024654e004"}
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.861363 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76566bc9b2bc0681f83e62ecb53a0d8740d963da2cb3f19a4cbef024654e004"
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.861380 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-52cpk"
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.997757 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s"]
Mar 14 06:15:55 crc kubenswrapper[4817]: E0314 06:15:55.998320 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64feffd-b411-4202-be0f-05aa1181f11b" containerName="collect-profiles"
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.998339 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64feffd-b411-4202-be0f-05aa1181f11b" containerName="collect-profiles"
Mar 14 06:15:55 crc kubenswrapper[4817]: E0314 06:15:55.998387 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd55a087-6a2d-4515-9774-96d247a25d52" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.998396 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd55a087-6a2d-4515-9774-96d247a25d52" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.998616 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64feffd-b411-4202-be0f-05aa1181f11b" containerName="collect-profiles"
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.998635 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd55a087-6a2d-4515-9774-96d247a25d52" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:15:55 crc kubenswrapper[4817]: I0314 06:15:55.999596 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s"
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.002436 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.002648 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.002659 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.003348 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.007592 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.007693 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.007950 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.012543 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s"]
Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.101936 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.102464 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.102569 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqq6\" (UniqueName: \"kubernetes.io/projected/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-kube-api-access-phqq6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.102837 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.103018 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.103123 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.103160 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.204712 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.204802 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.204832 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.204908 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.204949 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.204998 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phqq6\" (UniqueName: \"kubernetes.io/projected/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-kube-api-access-phqq6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.205054 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.211381 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.212106 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.213661 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.213755 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.214456 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.215026 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.236321 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phqq6\" (UniqueName: \"kubernetes.io/projected/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-kube-api-access-phqq6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.329670 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.775322 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s"] Mar 14 06:15:56 crc kubenswrapper[4817]: I0314 06:15:56.870317 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" event={"ID":"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91","Type":"ContainerStarted","Data":"727d36f2ba4f6cef03db45a318a1f297293758ebef964508000636416017fa09"} Mar 14 06:15:57 crc kubenswrapper[4817]: I0314 06:15:57.879724 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" event={"ID":"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91","Type":"ContainerStarted","Data":"8e847201ee99f20eea8e2ca8654e069cdf1250a6ef66b9cab8691ddeb18b2be1"} Mar 14 06:15:57 crc kubenswrapper[4817]: I0314 06:15:57.918721 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" podStartSLOduration=2.435344917 podStartE2EDuration="2.918679234s" podCreationTimestamp="2026-03-14 06:15:55 +0000 UTC" firstStartedPulling="2026-03-14 06:15:56.783146693 +0000 UTC m=+2610.821407439" lastFinishedPulling="2026-03-14 06:15:57.266481 +0000 UTC m=+2611.304741756" observedRunningTime="2026-03-14 06:15:57.906453426 +0000 UTC m=+2611.944714182" watchObservedRunningTime="2026-03-14 06:15:57.918679234 +0000 UTC m=+2611.956939980" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.139489 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557816-sjhsj"] Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.141375 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557816-sjhsj" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.144174 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.144174 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.144504 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.165778 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557816-sjhsj"] Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.232325 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgp8\" (UniqueName: \"kubernetes.io/projected/1aeec8d2-99b9-4797-b36e-e0344ccc8b19-kube-api-access-kcgp8\") pod \"auto-csr-approver-29557816-sjhsj\" (UID: \"1aeec8d2-99b9-4797-b36e-e0344ccc8b19\") " pod="openshift-infra/auto-csr-approver-29557816-sjhsj" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.334135 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgp8\" (UniqueName: \"kubernetes.io/projected/1aeec8d2-99b9-4797-b36e-e0344ccc8b19-kube-api-access-kcgp8\") pod \"auto-csr-approver-29557816-sjhsj\" (UID: \"1aeec8d2-99b9-4797-b36e-e0344ccc8b19\") " pod="openshift-infra/auto-csr-approver-29557816-sjhsj" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.353393 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgp8\" (UniqueName: \"kubernetes.io/projected/1aeec8d2-99b9-4797-b36e-e0344ccc8b19-kube-api-access-kcgp8\") pod \"auto-csr-approver-29557816-sjhsj\" (UID: \"1aeec8d2-99b9-4797-b36e-e0344ccc8b19\") " 
pod="openshift-infra/auto-csr-approver-29557816-sjhsj" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.473703 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557816-sjhsj" Mar 14 06:16:00 crc kubenswrapper[4817]: I0314 06:16:00.969383 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557816-sjhsj"] Mar 14 06:16:00 crc kubenswrapper[4817]: W0314 06:16:00.976378 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aeec8d2_99b9_4797_b36e_e0344ccc8b19.slice/crio-1776b8d72f913e7a55b04318d0ed4d4ef7416a2d366dd66e5d7ddd24456533f0 WatchSource:0}: Error finding container 1776b8d72f913e7a55b04318d0ed4d4ef7416a2d366dd66e5d7ddd24456533f0: Status 404 returned error can't find the container with id 1776b8d72f913e7a55b04318d0ed4d4ef7416a2d366dd66e5d7ddd24456533f0 Mar 14 06:16:01 crc kubenswrapper[4817]: I0314 06:16:01.917092 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557816-sjhsj" event={"ID":"1aeec8d2-99b9-4797-b36e-e0344ccc8b19","Type":"ContainerStarted","Data":"1776b8d72f913e7a55b04318d0ed4d4ef7416a2d366dd66e5d7ddd24456533f0"} Mar 14 06:16:02 crc kubenswrapper[4817]: I0314 06:16:02.927001 4817 generic.go:334] "Generic (PLEG): container finished" podID="1aeec8d2-99b9-4797-b36e-e0344ccc8b19" containerID="b77288481b908d1c7595c2c56820b36446df15976dd19f25c77a258ecf7c8bae" exitCode=0 Mar 14 06:16:02 crc kubenswrapper[4817]: I0314 06:16:02.927096 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557816-sjhsj" event={"ID":"1aeec8d2-99b9-4797-b36e-e0344ccc8b19","Type":"ContainerDied","Data":"b77288481b908d1c7595c2c56820b36446df15976dd19f25c77a258ecf7c8bae"} Mar 14 06:16:04 crc kubenswrapper[4817]: I0314 06:16:04.315197 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557816-sjhsj" Mar 14 06:16:04 crc kubenswrapper[4817]: I0314 06:16:04.421585 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcgp8\" (UniqueName: \"kubernetes.io/projected/1aeec8d2-99b9-4797-b36e-e0344ccc8b19-kube-api-access-kcgp8\") pod \"1aeec8d2-99b9-4797-b36e-e0344ccc8b19\" (UID: \"1aeec8d2-99b9-4797-b36e-e0344ccc8b19\") " Mar 14 06:16:04 crc kubenswrapper[4817]: I0314 06:16:04.432378 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aeec8d2-99b9-4797-b36e-e0344ccc8b19-kube-api-access-kcgp8" (OuterVolumeSpecName: "kube-api-access-kcgp8") pod "1aeec8d2-99b9-4797-b36e-e0344ccc8b19" (UID: "1aeec8d2-99b9-4797-b36e-e0344ccc8b19"). InnerVolumeSpecName "kube-api-access-kcgp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:16:04 crc kubenswrapper[4817]: I0314 06:16:04.524489 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcgp8\" (UniqueName: \"kubernetes.io/projected/1aeec8d2-99b9-4797-b36e-e0344ccc8b19-kube-api-access-kcgp8\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:04 crc kubenswrapper[4817]: I0314 06:16:04.949712 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557816-sjhsj" event={"ID":"1aeec8d2-99b9-4797-b36e-e0344ccc8b19","Type":"ContainerDied","Data":"1776b8d72f913e7a55b04318d0ed4d4ef7416a2d366dd66e5d7ddd24456533f0"} Mar 14 06:16:04 crc kubenswrapper[4817]: I0314 06:16:04.949759 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1776b8d72f913e7a55b04318d0ed4d4ef7416a2d366dd66e5d7ddd24456533f0" Mar 14 06:16:04 crc kubenswrapper[4817]: I0314 06:16:04.949777 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557816-sjhsj" Mar 14 06:16:05 crc kubenswrapper[4817]: I0314 06:16:05.404042 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557810-s8npb"] Mar 14 06:16:05 crc kubenswrapper[4817]: I0314 06:16:05.413024 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557810-s8npb"] Mar 14 06:16:06 crc kubenswrapper[4817]: I0314 06:16:06.754489 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:16:06 crc kubenswrapper[4817]: E0314 06:16:06.755394 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:16:06 crc kubenswrapper[4817]: I0314 06:16:06.767494 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6729a15-1d9c-4f1f-961a-07b401796464" path="/var/lib/kubelet/pods/e6729a15-1d9c-4f1f-961a-07b401796464/volumes" Mar 14 06:16:21 crc kubenswrapper[4817]: I0314 06:16:21.732817 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:16:21 crc kubenswrapper[4817]: E0314 06:16:21.734152 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:16:28 crc kubenswrapper[4817]: I0314 06:16:28.870049 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jzxcw"] Mar 14 06:16:28 crc kubenswrapper[4817]: E0314 06:16:28.871551 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aeec8d2-99b9-4797-b36e-e0344ccc8b19" containerName="oc" Mar 14 06:16:28 crc kubenswrapper[4817]: I0314 06:16:28.871575 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aeec8d2-99b9-4797-b36e-e0344ccc8b19" containerName="oc" Mar 14 06:16:28 crc kubenswrapper[4817]: I0314 06:16:28.871872 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aeec8d2-99b9-4797-b36e-e0344ccc8b19" containerName="oc" Mar 14 06:16:28 crc kubenswrapper[4817]: I0314 06:16:28.874291 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:28 crc kubenswrapper[4817]: I0314 06:16:28.907505 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzxcw"] Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.021738 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-catalog-content\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.021820 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q946v\" (UniqueName: \"kubernetes.io/projected/92f95d48-2b39-4d7d-a073-e45824341e69-kube-api-access-q946v\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc 
kubenswrapper[4817]: I0314 06:16:29.021875 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-utilities\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.123936 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-catalog-content\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.124052 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q946v\" (UniqueName: \"kubernetes.io/projected/92f95d48-2b39-4d7d-a073-e45824341e69-kube-api-access-q946v\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.124096 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-utilities\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.124600 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-utilities\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.124600 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-catalog-content\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.159534 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q946v\" (UniqueName: \"kubernetes.io/projected/92f95d48-2b39-4d7d-a073-e45824341e69-kube-api-access-q946v\") pod \"redhat-operators-jzxcw\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.208593 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:29 crc kubenswrapper[4817]: I0314 06:16:29.697736 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzxcw"] Mar 14 06:16:30 crc kubenswrapper[4817]: E0314 06:16:30.044704 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92f95d48_2b39_4d7d_a073_e45824341e69.slice/crio-9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92f95d48_2b39_4d7d_a073_e45824341e69.slice/crio-conmon-9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6.scope\": RecentStats: unable to find data in memory cache]" Mar 14 06:16:30 crc kubenswrapper[4817]: I0314 06:16:30.217997 4817 generic.go:334] "Generic (PLEG): container finished" podID="92f95d48-2b39-4d7d-a073-e45824341e69" containerID="9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6" exitCode=0 Mar 14 
06:16:30 crc kubenswrapper[4817]: I0314 06:16:30.218288 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxcw" event={"ID":"92f95d48-2b39-4d7d-a073-e45824341e69","Type":"ContainerDied","Data":"9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6"} Mar 14 06:16:30 crc kubenswrapper[4817]: I0314 06:16:30.218315 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxcw" event={"ID":"92f95d48-2b39-4d7d-a073-e45824341e69","Type":"ContainerStarted","Data":"96f198d727415ecd6236f2e62d1bcc4bbb4c5644b7951194b206e3f8e8f9532f"} Mar 14 06:16:31 crc kubenswrapper[4817]: I0314 06:16:31.230580 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxcw" event={"ID":"92f95d48-2b39-4d7d-a073-e45824341e69","Type":"ContainerStarted","Data":"a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e"} Mar 14 06:16:33 crc kubenswrapper[4817]: I0314 06:16:33.732893 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:16:33 crc kubenswrapper[4817]: E0314 06:16:33.733699 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:16:34 crc kubenswrapper[4817]: I0314 06:16:34.268278 4817 generic.go:334] "Generic (PLEG): container finished" podID="92f95d48-2b39-4d7d-a073-e45824341e69" containerID="a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e" exitCode=0 Mar 14 06:16:34 crc kubenswrapper[4817]: I0314 06:16:34.268334 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jzxcw" event={"ID":"92f95d48-2b39-4d7d-a073-e45824341e69","Type":"ContainerDied","Data":"a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e"} Mar 14 06:16:35 crc kubenswrapper[4817]: I0314 06:16:35.279147 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxcw" event={"ID":"92f95d48-2b39-4d7d-a073-e45824341e69","Type":"ContainerStarted","Data":"c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099"} Mar 14 06:16:35 crc kubenswrapper[4817]: I0314 06:16:35.300142 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jzxcw" podStartSLOduration=2.7931676100000002 podStartE2EDuration="7.300116251s" podCreationTimestamp="2026-03-14 06:16:28 +0000 UTC" firstStartedPulling="2026-03-14 06:16:30.221255132 +0000 UTC m=+2644.259515878" lastFinishedPulling="2026-03-14 06:16:34.728203773 +0000 UTC m=+2648.766464519" observedRunningTime="2026-03-14 06:16:35.296478667 +0000 UTC m=+2649.334739423" watchObservedRunningTime="2026-03-14 06:16:35.300116251 +0000 UTC m=+2649.338377007" Mar 14 06:16:39 crc kubenswrapper[4817]: I0314 06:16:39.209389 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:39 crc kubenswrapper[4817]: I0314 06:16:39.210168 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:40 crc kubenswrapper[4817]: I0314 06:16:40.254815 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jzxcw" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="registry-server" probeResult="failure" output=< Mar 14 06:16:40 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 06:16:40 crc kubenswrapper[4817]: > Mar 14 06:16:45 crc kubenswrapper[4817]: I0314 
06:16:45.075754 4817 scope.go:117] "RemoveContainer" containerID="d28077df329f8a7ca712d04e7e05ebd2531e7d74325276fc490c34dfb5cd39a0" Mar 14 06:16:45 crc kubenswrapper[4817]: I0314 06:16:45.732790 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:16:45 crc kubenswrapper[4817]: E0314 06:16:45.733783 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:16:49 crc kubenswrapper[4817]: I0314 06:16:49.269213 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:49 crc kubenswrapper[4817]: I0314 06:16:49.327654 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:50 crc kubenswrapper[4817]: I0314 06:16:50.387962 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzxcw"] Mar 14 06:16:50 crc kubenswrapper[4817]: I0314 06:16:50.428259 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jzxcw" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="registry-server" containerID="cri-o://c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099" gracePeriod=2 Mar 14 06:16:50 crc kubenswrapper[4817]: I0314 06:16:50.993410 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.129851 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-catalog-content\") pod \"92f95d48-2b39-4d7d-a073-e45824341e69\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.129915 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q946v\" (UniqueName: \"kubernetes.io/projected/92f95d48-2b39-4d7d-a073-e45824341e69-kube-api-access-q946v\") pod \"92f95d48-2b39-4d7d-a073-e45824341e69\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.129960 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-utilities\") pod \"92f95d48-2b39-4d7d-a073-e45824341e69\" (UID: \"92f95d48-2b39-4d7d-a073-e45824341e69\") " Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.131356 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-utilities" (OuterVolumeSpecName: "utilities") pod "92f95d48-2b39-4d7d-a073-e45824341e69" (UID: "92f95d48-2b39-4d7d-a073-e45824341e69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.138916 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f95d48-2b39-4d7d-a073-e45824341e69-kube-api-access-q946v" (OuterVolumeSpecName: "kube-api-access-q946v") pod "92f95d48-2b39-4d7d-a073-e45824341e69" (UID: "92f95d48-2b39-4d7d-a073-e45824341e69"). InnerVolumeSpecName "kube-api-access-q946v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.233085 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q946v\" (UniqueName: \"kubernetes.io/projected/92f95d48-2b39-4d7d-a073-e45824341e69-kube-api-access-q946v\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.233146 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.331503 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92f95d48-2b39-4d7d-a073-e45824341e69" (UID: "92f95d48-2b39-4d7d-a073-e45824341e69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.335498 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92f95d48-2b39-4d7d-a073-e45824341e69-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.444623 4817 generic.go:334] "Generic (PLEG): container finished" podID="92f95d48-2b39-4d7d-a073-e45824341e69" containerID="c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099" exitCode=0 Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.444682 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxcw" event={"ID":"92f95d48-2b39-4d7d-a073-e45824341e69","Type":"ContainerDied","Data":"c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099"} Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.444726 4817 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-jzxcw" event={"ID":"92f95d48-2b39-4d7d-a073-e45824341e69","Type":"ContainerDied","Data":"96f198d727415ecd6236f2e62d1bcc4bbb4c5644b7951194b206e3f8e8f9532f"} Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.444765 4817 scope.go:117] "RemoveContainer" containerID="c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.444794 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzxcw" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.476109 4817 scope.go:117] "RemoveContainer" containerID="a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.504092 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzxcw"] Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.514070 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jzxcw"] Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.526139 4817 scope.go:117] "RemoveContainer" containerID="9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.575579 4817 scope.go:117] "RemoveContainer" containerID="c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099" Mar 14 06:16:51 crc kubenswrapper[4817]: E0314 06:16:51.576494 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099\": container with ID starting with c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099 not found: ID does not exist" containerID="c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.576550 4817 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099"} err="failed to get container status \"c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099\": rpc error: code = NotFound desc = could not find container \"c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099\": container with ID starting with c7f95ffe026644648d3e0305c12ab963a8b75eef94f308b6163a574d49f62099 not found: ID does not exist" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.576582 4817 scope.go:117] "RemoveContainer" containerID="a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e" Mar 14 06:16:51 crc kubenswrapper[4817]: E0314 06:16:51.576932 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e\": container with ID starting with a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e not found: ID does not exist" containerID="a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.576968 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e"} err="failed to get container status \"a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e\": rpc error: code = NotFound desc = could not find container \"a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e\": container with ID starting with a0eb266e490b57b3441dd730ed857d0f26629426c364b946b26ed8ee6e4b127e not found: ID does not exist" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.576987 4817 scope.go:117] "RemoveContainer" containerID="9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6" Mar 14 06:16:51 crc kubenswrapper[4817]: E0314 
06:16:51.577267 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6\": container with ID starting with 9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6 not found: ID does not exist" containerID="9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6" Mar 14 06:16:51 crc kubenswrapper[4817]: I0314 06:16:51.577356 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6"} err="failed to get container status \"9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6\": rpc error: code = NotFound desc = could not find container \"9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6\": container with ID starting with 9e952abd811de29c19f2d31a3aee014a564f1bafce44a04da83c96ffa754f3e6 not found: ID does not exist" Mar 14 06:16:52 crc kubenswrapper[4817]: I0314 06:16:52.463045 4817 generic.go:334] "Generic (PLEG): container finished" podID="87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" containerID="8e847201ee99f20eea8e2ca8654e069cdf1250a6ef66b9cab8691ddeb18b2be1" exitCode=0 Mar 14 06:16:52 crc kubenswrapper[4817]: I0314 06:16:52.463108 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" event={"ID":"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91","Type":"ContainerDied","Data":"8e847201ee99f20eea8e2ca8654e069cdf1250a6ef66b9cab8691ddeb18b2be1"} Mar 14 06:16:52 crc kubenswrapper[4817]: I0314 06:16:52.752033 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" path="/var/lib/kubelet/pods/92f95d48-2b39-4d7d-a073-e45824341e69/volumes" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.004191 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.107474 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-ovn-metadata-agent-neutron-config-0\") pod \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.107537 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ssh-key-openstack-edpm-ipam\") pod \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.107787 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-metadata-combined-ca-bundle\") pod \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.107870 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ceph\") pod \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.107999 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-inventory\") pod \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 
06:16:54.108066 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-nova-metadata-neutron-config-0\") pod \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.108118 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phqq6\" (UniqueName: \"kubernetes.io/projected/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-kube-api-access-phqq6\") pod \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\" (UID: \"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91\") " Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.114125 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" (UID: "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.114417 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ceph" (OuterVolumeSpecName: "ceph") pod "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" (UID: "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.123377 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-kube-api-access-phqq6" (OuterVolumeSpecName: "kube-api-access-phqq6") pod "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" (UID: "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91"). 
InnerVolumeSpecName "kube-api-access-phqq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.136967 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" (UID: "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.137469 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" (UID: "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.138803 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" (UID: "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.146400 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-inventory" (OuterVolumeSpecName: "inventory") pod "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" (UID: "87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.211636 4817 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.211687 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.211703 4817 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.211720 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.211735 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.211749 4817 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.211762 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phqq6\" (UniqueName: 
\"kubernetes.io/projected/87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91-kube-api-access-phqq6\") on node \"crc\" DevicePath \"\"" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.482695 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" event={"ID":"87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91","Type":"ContainerDied","Data":"727d36f2ba4f6cef03db45a318a1f297293758ebef964508000636416017fa09"} Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.482789 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="727d36f2ba4f6cef03db45a318a1f297293758ebef964508000636416017fa09" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.482917 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.711617 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn"] Mar 14 06:16:54 crc kubenswrapper[4817]: E0314 06:16:54.712068 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.712089 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 14 06:16:54 crc kubenswrapper[4817]: E0314 06:16:54.712124 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="registry-server" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.712131 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="registry-server" Mar 14 06:16:54 crc kubenswrapper[4817]: E0314 
06:16:54.712144 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="extract-utilities" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.712152 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="extract-utilities" Mar 14 06:16:54 crc kubenswrapper[4817]: E0314 06:16:54.712161 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="extract-content" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.712168 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="extract-content" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.712339 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f95d48-2b39-4d7d-a073-e45824341e69" containerName="registry-server" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.712348 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.712986 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.715167 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.715467 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.715795 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.716060 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.717379 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.728807 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.757539 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn"] Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.822201 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5cj\" (UniqueName: \"kubernetes.io/projected/921c813c-e71f-4a7b-b74c-a389c71e1d4f-kube-api-access-ts5cj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.822259 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.822311 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.822341 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.822408 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.822441 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.924771 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.924841 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.924991 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.925070 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.925114 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts5cj\" (UniqueName: 
\"kubernetes.io/projected/921c813c-e71f-4a7b-b74c-a389c71e1d4f-kube-api-access-ts5cj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.925142 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.930702 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.931293 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.933821 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 
14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.940791 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.941731 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:54 crc kubenswrapper[4817]: I0314 06:16:54.944416 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts5cj\" (UniqueName: \"kubernetes.io/projected/921c813c-e71f-4a7b-b74c-a389c71e1d4f-kube-api-access-ts5cj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:55 crc kubenswrapper[4817]: I0314 06:16:55.035775 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" Mar 14 06:16:55 crc kubenswrapper[4817]: W0314 06:16:55.592246 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod921c813c_e71f_4a7b_b74c_a389c71e1d4f.slice/crio-7fc0f65baae56b67b2775f0eef66f420ed35e8054816063a1ae6a1ddfee8dc5e WatchSource:0}: Error finding container 7fc0f65baae56b67b2775f0eef66f420ed35e8054816063a1ae6a1ddfee8dc5e: Status 404 returned error can't find the container with id 7fc0f65baae56b67b2775f0eef66f420ed35e8054816063a1ae6a1ddfee8dc5e Mar 14 06:16:55 crc kubenswrapper[4817]: I0314 06:16:55.592365 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn"] Mar 14 06:16:56 crc kubenswrapper[4817]: I0314 06:16:56.505618 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" event={"ID":"921c813c-e71f-4a7b-b74c-a389c71e1d4f","Type":"ContainerStarted","Data":"930c4f78c343a7f862f7eaacc2b772c58a5ace89436b2aa903219654d7773c57"} Mar 14 06:16:56 crc kubenswrapper[4817]: I0314 06:16:56.506039 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" event={"ID":"921c813c-e71f-4a7b-b74c-a389c71e1d4f","Type":"ContainerStarted","Data":"7fc0f65baae56b67b2775f0eef66f420ed35e8054816063a1ae6a1ddfee8dc5e"} Mar 14 06:16:56 crc kubenswrapper[4817]: I0314 06:16:56.539481 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" podStartSLOduration=2.119487794 podStartE2EDuration="2.539450957s" podCreationTimestamp="2026-03-14 06:16:54 +0000 UTC" firstStartedPulling="2026-03-14 06:16:55.596318033 +0000 UTC m=+2669.634578789" lastFinishedPulling="2026-03-14 06:16:56.016281196 +0000 UTC m=+2670.054541952" 
observedRunningTime="2026-03-14 06:16:56.53076823 +0000 UTC m=+2670.569028996" watchObservedRunningTime="2026-03-14 06:16:56.539450957 +0000 UTC m=+2670.577711713" Mar 14 06:17:00 crc kubenswrapper[4817]: I0314 06:17:00.835208 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:17:00 crc kubenswrapper[4817]: E0314 06:17:00.835924 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:17:14 crc kubenswrapper[4817]: I0314 06:17:14.732190 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18" Mar 14 06:17:15 crc kubenswrapper[4817]: I0314 06:17:15.313154 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"ae702fc959da5668d0a32e1c36194134963e06fd40bf87f3f21cc52386ca0ffe"} Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.181962 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557818-m6dtc"] Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.184119 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557818-m6dtc" Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.192017 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.192254 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.196243 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.260340 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557818-m6dtc"] Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.277094 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pnhb\" (UniqueName: \"kubernetes.io/projected/1911b834-0d38-466a-9549-fce55d991182-kube-api-access-9pnhb\") pod \"auto-csr-approver-29557818-m6dtc\" (UID: \"1911b834-0d38-466a-9549-fce55d991182\") " pod="openshift-infra/auto-csr-approver-29557818-m6dtc" Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.382302 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pnhb\" (UniqueName: \"kubernetes.io/projected/1911b834-0d38-466a-9549-fce55d991182-kube-api-access-9pnhb\") pod \"auto-csr-approver-29557818-m6dtc\" (UID: \"1911b834-0d38-466a-9549-fce55d991182\") " pod="openshift-infra/auto-csr-approver-29557818-m6dtc" Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.410726 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pnhb\" (UniqueName: \"kubernetes.io/projected/1911b834-0d38-466a-9549-fce55d991182-kube-api-access-9pnhb\") pod \"auto-csr-approver-29557818-m6dtc\" (UID: \"1911b834-0d38-466a-9549-fce55d991182\") " 
pod="openshift-infra/auto-csr-approver-29557818-m6dtc" Mar 14 06:18:00 crc kubenswrapper[4817]: I0314 06:18:00.551837 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557818-m6dtc" Mar 14 06:18:01 crc kubenswrapper[4817]: I0314 06:18:01.060232 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557818-m6dtc"] Mar 14 06:18:01 crc kubenswrapper[4817]: I0314 06:18:01.794728 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557818-m6dtc" event={"ID":"1911b834-0d38-466a-9549-fce55d991182","Type":"ContainerStarted","Data":"801e8d4b0a6191d124b98c45fc61372ab1496f042f68236a40df9e0e8da3befa"} Mar 14 06:18:02 crc kubenswrapper[4817]: I0314 06:18:02.805209 4817 generic.go:334] "Generic (PLEG): container finished" podID="1911b834-0d38-466a-9549-fce55d991182" containerID="42b056da6dd99e06e19ebb353029d0ff54b02b1f982f84c358b0a02ddfaba5ba" exitCode=0 Mar 14 06:18:02 crc kubenswrapper[4817]: I0314 06:18:02.805326 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557818-m6dtc" event={"ID":"1911b834-0d38-466a-9549-fce55d991182","Type":"ContainerDied","Data":"42b056da6dd99e06e19ebb353029d0ff54b02b1f982f84c358b0a02ddfaba5ba"} Mar 14 06:18:04 crc kubenswrapper[4817]: I0314 06:18:04.233130 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557818-m6dtc" Mar 14 06:18:04 crc kubenswrapper[4817]: I0314 06:18:04.269039 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pnhb\" (UniqueName: \"kubernetes.io/projected/1911b834-0d38-466a-9549-fce55d991182-kube-api-access-9pnhb\") pod \"1911b834-0d38-466a-9549-fce55d991182\" (UID: \"1911b834-0d38-466a-9549-fce55d991182\") " Mar 14 06:18:04 crc kubenswrapper[4817]: I0314 06:18:04.279189 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1911b834-0d38-466a-9549-fce55d991182-kube-api-access-9pnhb" (OuterVolumeSpecName: "kube-api-access-9pnhb") pod "1911b834-0d38-466a-9549-fce55d991182" (UID: "1911b834-0d38-466a-9549-fce55d991182"). InnerVolumeSpecName "kube-api-access-9pnhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:18:04 crc kubenswrapper[4817]: I0314 06:18:04.372590 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pnhb\" (UniqueName: \"kubernetes.io/projected/1911b834-0d38-466a-9549-fce55d991182-kube-api-access-9pnhb\") on node \"crc\" DevicePath \"\"" Mar 14 06:18:04 crc kubenswrapper[4817]: I0314 06:18:04.824399 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557818-m6dtc" event={"ID":"1911b834-0d38-466a-9549-fce55d991182","Type":"ContainerDied","Data":"801e8d4b0a6191d124b98c45fc61372ab1496f042f68236a40df9e0e8da3befa"} Mar 14 06:18:04 crc kubenswrapper[4817]: I0314 06:18:04.824445 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="801e8d4b0a6191d124b98c45fc61372ab1496f042f68236a40df9e0e8da3befa" Mar 14 06:18:04 crc kubenswrapper[4817]: I0314 06:18:04.824478 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557818-m6dtc" Mar 14 06:18:05 crc kubenswrapper[4817]: I0314 06:18:05.307458 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557812-dj9s9"] Mar 14 06:18:05 crc kubenswrapper[4817]: I0314 06:18:05.315324 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557812-dj9s9"] Mar 14 06:18:06 crc kubenswrapper[4817]: I0314 06:18:06.743738 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf497723-2086-4244-a00d-636a8e10b54c" path="/var/lib/kubelet/pods/bf497723-2086-4244-a00d-636a8e10b54c/volumes" Mar 14 06:18:45 crc kubenswrapper[4817]: I0314 06:18:45.224587 4817 scope.go:117] "RemoveContainer" containerID="ed78141dfd235e323d6cdae005a1ef3f9b3f34758caaf7ae3f4a6283b7333090" Mar 14 06:19:38 crc kubenswrapper[4817]: I0314 06:19:38.566149 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:19:38 crc kubenswrapper[4817]: I0314 06:19:38.566733 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.151863 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557820-v496p"] Mar 14 06:20:00 crc kubenswrapper[4817]: E0314 06:20:00.152945 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1911b834-0d38-466a-9549-fce55d991182" containerName="oc" Mar 14 06:20:00 crc 
kubenswrapper[4817]: I0314 06:20:00.152963 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1911b834-0d38-466a-9549-fce55d991182" containerName="oc" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.153213 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1911b834-0d38-466a-9549-fce55d991182" containerName="oc" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.154021 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557820-v496p" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.158485 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.158833 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.159022 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.166328 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557820-v496p"] Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.283464 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6czld\" (UniqueName: \"kubernetes.io/projected/adc6ec74-28c9-4db4-bb37-39938658d717-kube-api-access-6czld\") pod \"auto-csr-approver-29557820-v496p\" (UID: \"adc6ec74-28c9-4db4-bb37-39938658d717\") " pod="openshift-infra/auto-csr-approver-29557820-v496p" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.386082 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6czld\" (UniqueName: \"kubernetes.io/projected/adc6ec74-28c9-4db4-bb37-39938658d717-kube-api-access-6czld\") pod \"auto-csr-approver-29557820-v496p\" 
(UID: \"adc6ec74-28c9-4db4-bb37-39938658d717\") " pod="openshift-infra/auto-csr-approver-29557820-v496p" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.416955 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6czld\" (UniqueName: \"kubernetes.io/projected/adc6ec74-28c9-4db4-bb37-39938658d717-kube-api-access-6czld\") pod \"auto-csr-approver-29557820-v496p\" (UID: \"adc6ec74-28c9-4db4-bb37-39938658d717\") " pod="openshift-infra/auto-csr-approver-29557820-v496p" Mar 14 06:20:00 crc kubenswrapper[4817]: I0314 06:20:00.496960 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557820-v496p" Mar 14 06:20:01 crc kubenswrapper[4817]: I0314 06:20:01.020088 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557820-v496p"] Mar 14 06:20:01 crc kubenswrapper[4817]: I0314 06:20:01.045958 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 06:20:01 crc kubenswrapper[4817]: I0314 06:20:01.067036 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557820-v496p" event={"ID":"adc6ec74-28c9-4db4-bb37-39938658d717","Type":"ContainerStarted","Data":"123f2d620e3a4ea292b7e302b44e3f7a65a8967601f4faf393e9e95dc6e6830e"} Mar 14 06:20:03 crc kubenswrapper[4817]: I0314 06:20:03.095541 4817 generic.go:334] "Generic (PLEG): container finished" podID="adc6ec74-28c9-4db4-bb37-39938658d717" containerID="353dc8d0033012041145f11991d43acc004ebc42960283d8b01725ba2b903825" exitCode=0 Mar 14 06:20:03 crc kubenswrapper[4817]: I0314 06:20:03.095675 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557820-v496p" event={"ID":"adc6ec74-28c9-4db4-bb37-39938658d717","Type":"ContainerDied","Data":"353dc8d0033012041145f11991d43acc004ebc42960283d8b01725ba2b903825"} Mar 14 06:20:04 crc kubenswrapper[4817]: I0314 
06:20:04.458847 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557820-v496p" Mar 14 06:20:04 crc kubenswrapper[4817]: I0314 06:20:04.570355 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6czld\" (UniqueName: \"kubernetes.io/projected/adc6ec74-28c9-4db4-bb37-39938658d717-kube-api-access-6czld\") pod \"adc6ec74-28c9-4db4-bb37-39938658d717\" (UID: \"adc6ec74-28c9-4db4-bb37-39938658d717\") " Mar 14 06:20:04 crc kubenswrapper[4817]: I0314 06:20:04.576322 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc6ec74-28c9-4db4-bb37-39938658d717-kube-api-access-6czld" (OuterVolumeSpecName: "kube-api-access-6czld") pod "adc6ec74-28c9-4db4-bb37-39938658d717" (UID: "adc6ec74-28c9-4db4-bb37-39938658d717"). InnerVolumeSpecName "kube-api-access-6czld". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:20:04 crc kubenswrapper[4817]: I0314 06:20:04.672797 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6czld\" (UniqueName: \"kubernetes.io/projected/adc6ec74-28c9-4db4-bb37-39938658d717-kube-api-access-6czld\") on node \"crc\" DevicePath \"\"" Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.114124 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557820-v496p" event={"ID":"adc6ec74-28c9-4db4-bb37-39938658d717","Type":"ContainerDied","Data":"123f2d620e3a4ea292b7e302b44e3f7a65a8967601f4faf393e9e95dc6e6830e"} Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.114182 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="123f2d620e3a4ea292b7e302b44e3f7a65a8967601f4faf393e9e95dc6e6830e" Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.114201 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557820-v496p" Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.553401 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557814-npvc7"] Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.565627 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557814-npvc7"] Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.969703 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j8b79"] Mar 14 06:20:05 crc kubenswrapper[4817]: E0314 06:20:05.970275 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6ec74-28c9-4db4-bb37-39938658d717" containerName="oc" Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.970300 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc6ec74-28c9-4db4-bb37-39938658d717" containerName="oc" Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.970561 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc6ec74-28c9-4db4-bb37-39938658d717" containerName="oc" Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.972572 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:05 crc kubenswrapper[4817]: I0314 06:20:05.997371 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8b79"] Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.100518 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-catalog-content\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.100703 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-utilities\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.101104 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpjjq\" (UniqueName: \"kubernetes.io/projected/c73f6175-582b-4374-b2c5-e63552b98cd2-kube-api-access-fpjjq\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.157315 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-67h2x"] Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.159824 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.172441 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-67h2x"] Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.203457 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-catalog-content\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.203915 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-utilities\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.204029 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpjjq\" (UniqueName: \"kubernetes.io/projected/c73f6175-582b-4374-b2c5-e63552b98cd2-kube-api-access-fpjjq\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.204256 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-catalog-content\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.204663 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-utilities\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.238889 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpjjq\" (UniqueName: \"kubernetes.io/projected/c73f6175-582b-4374-b2c5-e63552b98cd2-kube-api-access-fpjjq\") pod \"redhat-marketplace-j8b79\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.304814 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.305971 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-utilities\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.306322 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-catalog-content\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.306480 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc4pn\" (UniqueName: \"kubernetes.io/projected/756b8f2a-768a-4198-ac30-6dcc15497542-kube-api-access-tc4pn\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " 
pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.408808 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-utilities\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.408924 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-catalog-content\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.408977 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc4pn\" (UniqueName: \"kubernetes.io/projected/756b8f2a-768a-4198-ac30-6dcc15497542-kube-api-access-tc4pn\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.409425 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-utilities\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.409629 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-catalog-content\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " 
pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.461789 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc4pn\" (UniqueName: \"kubernetes.io/projected/756b8f2a-768a-4198-ac30-6dcc15497542-kube-api-access-tc4pn\") pod \"certified-operators-67h2x\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.479127 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.636246 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8b79"] Mar 14 06:20:06 crc kubenswrapper[4817]: W0314 06:20:06.655072 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc73f6175_582b_4374_b2c5_e63552b98cd2.slice/crio-864b05eb170ebff0466449d1554868221cb4ed22116eaa611420ad3cf85b66e2 WatchSource:0}: Error finding container 864b05eb170ebff0466449d1554868221cb4ed22116eaa611420ad3cf85b66e2: Status 404 returned error can't find the container with id 864b05eb170ebff0466449d1554868221cb4ed22116eaa611420ad3cf85b66e2 Mar 14 06:20:06 crc kubenswrapper[4817]: I0314 06:20:06.759827 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f260e502-12c6-4563-980d-06ca27c3f78c" path="/var/lib/kubelet/pods/f260e502-12c6-4563-980d-06ca27c3f78c/volumes" Mar 14 06:20:07 crc kubenswrapper[4817]: I0314 06:20:07.033079 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-67h2x"] Mar 14 06:20:07 crc kubenswrapper[4817]: W0314 06:20:07.043094 4817 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod756b8f2a_768a_4198_ac30_6dcc15497542.slice/crio-34108da97b075550a61befba8fbf40d3f3bcdf7b387ce55e82730a104fd00257 WatchSource:0}: Error finding container 34108da97b075550a61befba8fbf40d3f3bcdf7b387ce55e82730a104fd00257: Status 404 returned error can't find the container with id 34108da97b075550a61befba8fbf40d3f3bcdf7b387ce55e82730a104fd00257 Mar 14 06:20:07 crc kubenswrapper[4817]: I0314 06:20:07.137793 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67h2x" event={"ID":"756b8f2a-768a-4198-ac30-6dcc15497542","Type":"ContainerStarted","Data":"34108da97b075550a61befba8fbf40d3f3bcdf7b387ce55e82730a104fd00257"} Mar 14 06:20:07 crc kubenswrapper[4817]: I0314 06:20:07.141868 4817 generic.go:334] "Generic (PLEG): container finished" podID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerID="5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b" exitCode=0 Mar 14 06:20:07 crc kubenswrapper[4817]: I0314 06:20:07.142061 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8b79" event={"ID":"c73f6175-582b-4374-b2c5-e63552b98cd2","Type":"ContainerDied","Data":"5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b"} Mar 14 06:20:07 crc kubenswrapper[4817]: I0314 06:20:07.142151 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8b79" event={"ID":"c73f6175-582b-4374-b2c5-e63552b98cd2","Type":"ContainerStarted","Data":"864b05eb170ebff0466449d1554868221cb4ed22116eaa611420ad3cf85b66e2"} Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.152400 4817 generic.go:334] "Generic (PLEG): container finished" podID="756b8f2a-768a-4198-ac30-6dcc15497542" containerID="db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d" exitCode=0 Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.152482 4817 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-67h2x" event={"ID":"756b8f2a-768a-4198-ac30-6dcc15497542","Type":"ContainerDied","Data":"db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d"} Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.157547 4817 generic.go:334] "Generic (PLEG): container finished" podID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerID="abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66" exitCode=0 Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.157624 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8b79" event={"ID":"c73f6175-582b-4374-b2c5-e63552b98cd2","Type":"ContainerDied","Data":"abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66"} Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.563537 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-75895"] Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.565768 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.565880 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.566744 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.572599 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75895"] Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.622511 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-catalog-content\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.622866 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8n2\" (UniqueName: \"kubernetes.io/projected/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-kube-api-access-rp8n2\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.623278 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-utilities\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.725630 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-catalog-content\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.726121 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rp8n2\" (UniqueName: \"kubernetes.io/projected/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-kube-api-access-rp8n2\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.726168 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-catalog-content\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.726229 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-utilities\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.726606 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-utilities\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.757840 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp8n2\" (UniqueName: \"kubernetes.io/projected/2d671e4a-c87a-43b1-9bf6-bec660b13dc4-kube-api-access-rp8n2\") pod \"community-operators-75895\" (UID: \"2d671e4a-c87a-43b1-9bf6-bec660b13dc4\") " pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:08 crc kubenswrapper[4817]: I0314 06:20:08.903315 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:09 crc kubenswrapper[4817]: I0314 06:20:09.187857 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8b79" event={"ID":"c73f6175-582b-4374-b2c5-e63552b98cd2","Type":"ContainerStarted","Data":"69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6"} Mar 14 06:20:09 crc kubenswrapper[4817]: I0314 06:20:09.205143 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67h2x" event={"ID":"756b8f2a-768a-4198-ac30-6dcc15497542","Type":"ContainerStarted","Data":"50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6"} Mar 14 06:20:09 crc kubenswrapper[4817]: I0314 06:20:09.233323 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j8b79" podStartSLOduration=2.809039151 podStartE2EDuration="4.233278905s" podCreationTimestamp="2026-03-14 06:20:05 +0000 UTC" firstStartedPulling="2026-03-14 06:20:07.143930241 +0000 UTC m=+2861.182190987" lastFinishedPulling="2026-03-14 06:20:08.568169985 +0000 UTC m=+2862.606430741" observedRunningTime="2026-03-14 06:20:09.223278831 +0000 UTC m=+2863.261539587" watchObservedRunningTime="2026-03-14 06:20:09.233278905 +0000 UTC m=+2863.271539651" Mar 14 06:20:09 crc kubenswrapper[4817]: I0314 06:20:09.454438 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75895"] Mar 14 06:20:09 crc kubenswrapper[4817]: W0314 06:20:09.475438 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d671e4a_c87a_43b1_9bf6_bec660b13dc4.slice/crio-6b2a8879d13e5ab39243d33bf104aea2b0b9278ae2bea598cd344ca017fdcebd WatchSource:0}: Error finding container 6b2a8879d13e5ab39243d33bf104aea2b0b9278ae2bea598cd344ca017fdcebd: Status 404 returned error can't find the container 
with id 6b2a8879d13e5ab39243d33bf104aea2b0b9278ae2bea598cd344ca017fdcebd Mar 14 06:20:10 crc kubenswrapper[4817]: I0314 06:20:10.222948 4817 generic.go:334] "Generic (PLEG): container finished" podID="2d671e4a-c87a-43b1-9bf6-bec660b13dc4" containerID="eb9328636fed01e62f3ad3a85b8c89477733664d8d47497e2b901965615511e5" exitCode=0 Mar 14 06:20:10 crc kubenswrapper[4817]: I0314 06:20:10.223032 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75895" event={"ID":"2d671e4a-c87a-43b1-9bf6-bec660b13dc4","Type":"ContainerDied","Data":"eb9328636fed01e62f3ad3a85b8c89477733664d8d47497e2b901965615511e5"} Mar 14 06:20:10 crc kubenswrapper[4817]: I0314 06:20:10.223066 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75895" event={"ID":"2d671e4a-c87a-43b1-9bf6-bec660b13dc4","Type":"ContainerStarted","Data":"6b2a8879d13e5ab39243d33bf104aea2b0b9278ae2bea598cd344ca017fdcebd"} Mar 14 06:20:10 crc kubenswrapper[4817]: I0314 06:20:10.242247 4817 generic.go:334] "Generic (PLEG): container finished" podID="756b8f2a-768a-4198-ac30-6dcc15497542" containerID="50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6" exitCode=0 Mar 14 06:20:10 crc kubenswrapper[4817]: I0314 06:20:10.242420 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67h2x" event={"ID":"756b8f2a-768a-4198-ac30-6dcc15497542","Type":"ContainerDied","Data":"50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6"} Mar 14 06:20:11 crc kubenswrapper[4817]: I0314 06:20:11.253543 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67h2x" event={"ID":"756b8f2a-768a-4198-ac30-6dcc15497542","Type":"ContainerStarted","Data":"365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a"} Mar 14 06:20:11 crc kubenswrapper[4817]: I0314 06:20:11.287306 4817 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-67h2x" podStartSLOduration=2.731902287 podStartE2EDuration="5.287280237s" podCreationTimestamp="2026-03-14 06:20:06 +0000 UTC" firstStartedPulling="2026-03-14 06:20:08.155163884 +0000 UTC m=+2862.193424630" lastFinishedPulling="2026-03-14 06:20:10.710541824 +0000 UTC m=+2864.748802580" observedRunningTime="2026-03-14 06:20:11.27717012 +0000 UTC m=+2865.315430876" watchObservedRunningTime="2026-03-14 06:20:11.287280237 +0000 UTC m=+2865.325540993" Mar 14 06:20:15 crc kubenswrapper[4817]: I0314 06:20:15.299676 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75895" event={"ID":"2d671e4a-c87a-43b1-9bf6-bec660b13dc4","Type":"ContainerStarted","Data":"3e8d484a2862178654727d2dddbef85777ed687c226eae54952cbcf1344353de"} Mar 14 06:20:15 crc kubenswrapper[4817]: E0314 06:20:15.634108 4817 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d671e4a_c87a_43b1_9bf6_bec660b13dc4.slice/crio-conmon-3e8d484a2862178654727d2dddbef85777ed687c226eae54952cbcf1344353de.scope\": RecentStats: unable to find data in memory cache]" Mar 14 06:20:16 crc kubenswrapper[4817]: I0314 06:20:16.305325 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:16 crc kubenswrapper[4817]: I0314 06:20:16.305653 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:16 crc kubenswrapper[4817]: I0314 06:20:16.310233 4817 generic.go:334] "Generic (PLEG): container finished" podID="2d671e4a-c87a-43b1-9bf6-bec660b13dc4" containerID="3e8d484a2862178654727d2dddbef85777ed687c226eae54952cbcf1344353de" exitCode=0 Mar 14 06:20:16 crc kubenswrapper[4817]: I0314 06:20:16.310274 4817 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-75895" event={"ID":"2d671e4a-c87a-43b1-9bf6-bec660b13dc4","Type":"ContainerDied","Data":"3e8d484a2862178654727d2dddbef85777ed687c226eae54952cbcf1344353de"} Mar 14 06:20:16 crc kubenswrapper[4817]: I0314 06:20:16.365806 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:16 crc kubenswrapper[4817]: I0314 06:20:16.479844 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:16 crc kubenswrapper[4817]: I0314 06:20:16.479983 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:16 crc kubenswrapper[4817]: I0314 06:20:16.531249 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:17 crc kubenswrapper[4817]: I0314 06:20:17.339145 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75895" event={"ID":"2d671e4a-c87a-43b1-9bf6-bec660b13dc4","Type":"ContainerStarted","Data":"c2f630a064da1fd8fceaa20f0615ba3a31445b3169e19ab14f75d9858cc9203d"} Mar 14 06:20:17 crc kubenswrapper[4817]: I0314 06:20:17.376858 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-75895" podStartSLOduration=2.850606296 podStartE2EDuration="9.376822829s" podCreationTimestamp="2026-03-14 06:20:08 +0000 UTC" firstStartedPulling="2026-03-14 06:20:10.225584153 +0000 UTC m=+2864.263844899" lastFinishedPulling="2026-03-14 06:20:16.751800686 +0000 UTC m=+2870.790061432" observedRunningTime="2026-03-14 06:20:17.364921472 +0000 UTC m=+2871.403182228" watchObservedRunningTime="2026-03-14 06:20:17.376822829 +0000 UTC m=+2871.415083615" Mar 14 06:20:17 crc kubenswrapper[4817]: I0314 06:20:17.399136 
4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:17 crc kubenswrapper[4817]: I0314 06:20:17.434544 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:18 crc kubenswrapper[4817]: I0314 06:20:18.355711 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8b79"] Mar 14 06:20:18 crc kubenswrapper[4817]: I0314 06:20:18.904476 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:18 crc kubenswrapper[4817]: I0314 06:20:18.905056 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-75895" Mar 14 06:20:18 crc kubenswrapper[4817]: I0314 06:20:18.966432 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-67h2x"] Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.358080 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-67h2x" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" containerName="registry-server" containerID="cri-o://365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a" gracePeriod=2 Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.358193 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j8b79" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerName="registry-server" containerID="cri-o://69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6" gracePeriod=2 Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.951906 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.952136 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.966179 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-utilities\") pod \"756b8f2a-768a-4198-ac30-6dcc15497542\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.966253 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-utilities\") pod \"c73f6175-582b-4374-b2c5-e63552b98cd2\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.966351 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpjjq\" (UniqueName: \"kubernetes.io/projected/c73f6175-582b-4374-b2c5-e63552b98cd2-kube-api-access-fpjjq\") pod \"c73f6175-582b-4374-b2c5-e63552b98cd2\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.966474 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc4pn\" (UniqueName: \"kubernetes.io/projected/756b8f2a-768a-4198-ac30-6dcc15497542-kube-api-access-tc4pn\") pod \"756b8f2a-768a-4198-ac30-6dcc15497542\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.966552 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-catalog-content\") pod 
\"756b8f2a-768a-4198-ac30-6dcc15497542\" (UID: \"756b8f2a-768a-4198-ac30-6dcc15497542\") " Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.966592 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-catalog-content\") pod \"c73f6175-582b-4374-b2c5-e63552b98cd2\" (UID: \"c73f6175-582b-4374-b2c5-e63552b98cd2\") " Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.968172 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-utilities" (OuterVolumeSpecName: "utilities") pod "c73f6175-582b-4374-b2c5-e63552b98cd2" (UID: "c73f6175-582b-4374-b2c5-e63552b98cd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.974438 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756b8f2a-768a-4198-ac30-6dcc15497542-kube-api-access-tc4pn" (OuterVolumeSpecName: "kube-api-access-tc4pn") pod "756b8f2a-768a-4198-ac30-6dcc15497542" (UID: "756b8f2a-768a-4198-ac30-6dcc15497542"). InnerVolumeSpecName "kube-api-access-tc4pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.974843 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73f6175-582b-4374-b2c5-e63552b98cd2-kube-api-access-fpjjq" (OuterVolumeSpecName: "kube-api-access-fpjjq") pod "c73f6175-582b-4374-b2c5-e63552b98cd2" (UID: "c73f6175-582b-4374-b2c5-e63552b98cd2"). InnerVolumeSpecName "kube-api-access-fpjjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.990870 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-utilities" (OuterVolumeSpecName: "utilities") pod "756b8f2a-768a-4198-ac30-6dcc15497542" (UID: "756b8f2a-768a-4198-ac30-6dcc15497542"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:20:19 crc kubenswrapper[4817]: I0314 06:20:19.994181 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-75895" podUID="2d671e4a-c87a-43b1-9bf6-bec660b13dc4" containerName="registry-server" probeResult="failure" output=< Mar 14 06:20:19 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 06:20:19 crc kubenswrapper[4817]: > Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.021412 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c73f6175-582b-4374-b2c5-e63552b98cd2" (UID: "c73f6175-582b-4374-b2c5-e63552b98cd2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.051446 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "756b8f2a-768a-4198-ac30-6dcc15497542" (UID: "756b8f2a-768a-4198-ac30-6dcc15497542"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.069064 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpjjq\" (UniqueName: \"kubernetes.io/projected/c73f6175-582b-4374-b2c5-e63552b98cd2-kube-api-access-fpjjq\") on node \"crc\" DevicePath \"\"" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.069179 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc4pn\" (UniqueName: \"kubernetes.io/projected/756b8f2a-768a-4198-ac30-6dcc15497542-kube-api-access-tc4pn\") on node \"crc\" DevicePath \"\"" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.069196 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.069209 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.069220 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/756b8f2a-768a-4198-ac30-6dcc15497542-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.069230 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c73f6175-582b-4374-b2c5-e63552b98cd2-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.368693 4817 generic.go:334] "Generic (PLEG): container finished" podID="756b8f2a-768a-4198-ac30-6dcc15497542" containerID="365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a" exitCode=0 Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.368747 4817 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-67h2x" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.368770 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67h2x" event={"ID":"756b8f2a-768a-4198-ac30-6dcc15497542","Type":"ContainerDied","Data":"365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a"} Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.368808 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-67h2x" event={"ID":"756b8f2a-768a-4198-ac30-6dcc15497542","Type":"ContainerDied","Data":"34108da97b075550a61befba8fbf40d3f3bcdf7b387ce55e82730a104fd00257"} Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.368832 4817 scope.go:117] "RemoveContainer" containerID="365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.372511 4817 generic.go:334] "Generic (PLEG): container finished" podID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerID="69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6" exitCode=0 Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.372622 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8b79" event={"ID":"c73f6175-582b-4374-b2c5-e63552b98cd2","Type":"ContainerDied","Data":"69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6"} Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.372639 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8b79" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.372660 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8b79" event={"ID":"c73f6175-582b-4374-b2c5-e63552b98cd2","Type":"ContainerDied","Data":"864b05eb170ebff0466449d1554868221cb4ed22116eaa611420ad3cf85b66e2"} Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.396052 4817 scope.go:117] "RemoveContainer" containerID="50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.412697 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-67h2x"] Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.421605 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-67h2x"] Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.433491 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8b79"] Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.440701 4817 scope.go:117] "RemoveContainer" containerID="db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.441332 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8b79"] Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.461056 4817 scope.go:117] "RemoveContainer" containerID="365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a" Mar 14 06:20:20 crc kubenswrapper[4817]: E0314 06:20:20.461531 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a\": container with ID starting with 365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a not found: ID does 
not exist" containerID="365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.461608 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a"} err="failed to get container status \"365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a\": rpc error: code = NotFound desc = could not find container \"365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a\": container with ID starting with 365b422e0f196ff02f672876b89ac91a5c05e95d5034efe7b2699939241da62a not found: ID does not exist" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.461653 4817 scope.go:117] "RemoveContainer" containerID="50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6" Mar 14 06:20:20 crc kubenswrapper[4817]: E0314 06:20:20.462399 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6\": container with ID starting with 50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6 not found: ID does not exist" containerID="50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.462437 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6"} err="failed to get container status \"50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6\": rpc error: code = NotFound desc = could not find container \"50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6\": container with ID starting with 50e364e1ab50528f3e195c78747d37f0025465a95ae62978322a0b79e33c50d6 not found: ID does not exist" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.462456 4817 
scope.go:117] "RemoveContainer" containerID="db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d" Mar 14 06:20:20 crc kubenswrapper[4817]: E0314 06:20:20.462763 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d\": container with ID starting with db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d not found: ID does not exist" containerID="db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.462844 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d"} err="failed to get container status \"db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d\": rpc error: code = NotFound desc = could not find container \"db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d\": container with ID starting with db2bffdca70e44df9e029ec76f9ae8c17e09f2eb57376c3c55f2a1723fc9808d not found: ID does not exist" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.462873 4817 scope.go:117] "RemoveContainer" containerID="69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.482919 4817 scope.go:117] "RemoveContainer" containerID="abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.504895 4817 scope.go:117] "RemoveContainer" containerID="5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.532283 4817 scope.go:117] "RemoveContainer" containerID="69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6" Mar 14 06:20:20 crc kubenswrapper[4817]: E0314 06:20:20.533152 4817 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6\": container with ID starting with 69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6 not found: ID does not exist" containerID="69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.533318 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6"} err="failed to get container status \"69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6\": rpc error: code = NotFound desc = could not find container \"69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6\": container with ID starting with 69223128c1a2d1845b5075883f658268cebb4231ca19377d234a23a977eab5c6 not found: ID does not exist" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.533342 4817 scope.go:117] "RemoveContainer" containerID="abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66" Mar 14 06:20:20 crc kubenswrapper[4817]: E0314 06:20:20.533662 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66\": container with ID starting with abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66 not found: ID does not exist" containerID="abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66" Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.533679 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66"} err="failed to get container status \"abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66\": rpc error: code = NotFound desc = could not find container 
\"abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66\": container with ID starting with abc97257c7651baf5bc41aad0e930e1a7f14b1b511269649ecc03fc1fa119f66 not found: ID does not exist"
Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.533693 4817 scope.go:117] "RemoveContainer" containerID="5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b"
Mar 14 06:20:20 crc kubenswrapper[4817]: E0314 06:20:20.534242 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b\": container with ID starting with 5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b not found: ID does not exist" containerID="5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b"
Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.534263 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b"} err="failed to get container status \"5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b\": rpc error: code = NotFound desc = could not find container \"5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b\": container with ID starting with 5c16a6af8d3e42e9335fc68ff894042839bf008d69eba466151adc975716af5b not found: ID does not exist"
Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.760720 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" path="/var/lib/kubelet/pods/756b8f2a-768a-4198-ac30-6dcc15497542/volumes"
Mar 14 06:20:20 crc kubenswrapper[4817]: I0314 06:20:20.762547 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" path="/var/lib/kubelet/pods/c73f6175-582b-4374-b2c5-e63552b98cd2/volumes"
Mar 14 06:20:28 crc kubenswrapper[4817]: I0314 06:20:28.956959 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-75895"
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.018704 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-75895"
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.105851 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75895"]
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.205879 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wm5jk"]
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.206144 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wm5jk" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerName="registry-server" containerID="cri-o://8a658da2278dcbf320365cfd4199aa58534c65c5a4827492fa7037b23195b5eb" gracePeriod=2
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.474925 4817 generic.go:334] "Generic (PLEG): container finished" podID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerID="8a658da2278dcbf320365cfd4199aa58534c65c5a4827492fa7037b23195b5eb" exitCode=0
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.474932 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm5jk" event={"ID":"5b9c285c-e272-4976-b90a-cbca8c3c1c28","Type":"ContainerDied","Data":"8a658da2278dcbf320365cfd4199aa58534c65c5a4827492fa7037b23195b5eb"}
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.621489 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.778477 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-catalog-content\") pod \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") "
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.778533 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h5ww\" (UniqueName: \"kubernetes.io/projected/5b9c285c-e272-4976-b90a-cbca8c3c1c28-kube-api-access-9h5ww\") pod \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") "
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.778753 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-utilities\") pod \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\" (UID: \"5b9c285c-e272-4976-b90a-cbca8c3c1c28\") "
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.779424 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-utilities" (OuterVolumeSpecName: "utilities") pod "5b9c285c-e272-4976-b90a-cbca8c3c1c28" (UID: "5b9c285c-e272-4976-b90a-cbca8c3c1c28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.786113 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9c285c-e272-4976-b90a-cbca8c3c1c28-kube-api-access-9h5ww" (OuterVolumeSpecName: "kube-api-access-9h5ww") pod "5b9c285c-e272-4976-b90a-cbca8c3c1c28" (UID: "5b9c285c-e272-4976-b90a-cbca8c3c1c28"). InnerVolumeSpecName "kube-api-access-9h5ww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.833255 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b9c285c-e272-4976-b90a-cbca8c3c1c28" (UID: "5b9c285c-e272-4976-b90a-cbca8c3c1c28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.880739 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.880771 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h5ww\" (UniqueName: \"kubernetes.io/projected/5b9c285c-e272-4976-b90a-cbca8c3c1c28-kube-api-access-9h5ww\") on node \"crc\" DevicePath \"\""
Mar 14 06:20:29 crc kubenswrapper[4817]: I0314 06:20:29.880783 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9c285c-e272-4976-b90a-cbca8c3c1c28-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 06:20:30 crc kubenswrapper[4817]: I0314 06:20:30.487178 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm5jk" event={"ID":"5b9c285c-e272-4976-b90a-cbca8c3c1c28","Type":"ContainerDied","Data":"6b977ab09b72f1d73d6b62c664f882f1995cb61bb886fcae6f8c20b66e4a25f3"}
Mar 14 06:20:30 crc kubenswrapper[4817]: I0314 06:20:30.487211 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wm5jk"
Mar 14 06:20:30 crc kubenswrapper[4817]: I0314 06:20:30.487652 4817 scope.go:117] "RemoveContainer" containerID="8a658da2278dcbf320365cfd4199aa58534c65c5a4827492fa7037b23195b5eb"
Mar 14 06:20:30 crc kubenswrapper[4817]: I0314 06:20:30.509832 4817 scope.go:117] "RemoveContainer" containerID="d9146b43fc22b393fef21c40017b8713146ae509e0b9bca2ec3ff7c8878058a8"
Mar 14 06:20:30 crc kubenswrapper[4817]: I0314 06:20:30.523462 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wm5jk"]
Mar 14 06:20:30 crc kubenswrapper[4817]: I0314 06:20:30.532196 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wm5jk"]
Mar 14 06:20:30 crc kubenswrapper[4817]: I0314 06:20:30.547861 4817 scope.go:117] "RemoveContainer" containerID="567d26a9ce8bd511b8fff064b3cc7962e1137324896f3ae196b5ebacefd89892"
Mar 14 06:20:30 crc kubenswrapper[4817]: I0314 06:20:30.744396 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" path="/var/lib/kubelet/pods/5b9c285c-e272-4976-b90a-cbca8c3c1c28/volumes"
Mar 14 06:20:38 crc kubenswrapper[4817]: I0314 06:20:38.565215 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:20:38 crc kubenswrapper[4817]: I0314 06:20:38.565831 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:20:38 crc kubenswrapper[4817]: I0314 06:20:38.565879 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl"
Mar 14 06:20:38 crc kubenswrapper[4817]: I0314 06:20:38.566887 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae702fc959da5668d0a32e1c36194134963e06fd40bf87f3f21cc52386ca0ffe"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 14 06:20:38 crc kubenswrapper[4817]: I0314 06:20:38.567045 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://ae702fc959da5668d0a32e1c36194134963e06fd40bf87f3f21cc52386ca0ffe" gracePeriod=600
Mar 14 06:20:39 crc kubenswrapper[4817]: I0314 06:20:39.581636 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="ae702fc959da5668d0a32e1c36194134963e06fd40bf87f3f21cc52386ca0ffe" exitCode=0
Mar 14 06:20:39 crc kubenswrapper[4817]: I0314 06:20:39.581738 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"ae702fc959da5668d0a32e1c36194134963e06fd40bf87f3f21cc52386ca0ffe"}
Mar 14 06:20:39 crc kubenswrapper[4817]: I0314 06:20:39.582462 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537"}
Mar 14 06:20:39 crc kubenswrapper[4817]: I0314 06:20:39.582501 4817 scope.go:117] "RemoveContainer" containerID="a0fea669a9a06c2911df51b4dc2afd338f97a1981408483490e0ae9336ae0d18"
Mar 14 06:20:45 crc kubenswrapper[4817]: I0314 06:20:45.392871 4817 scope.go:117] "RemoveContainer" containerID="7fe2d5926ca53da989aaefeada3baa11deaefb2ad23cfca9a41c23a27240f614"
Mar 14 06:21:14 crc kubenswrapper[4817]: I0314 06:21:14.961263 4817 generic.go:334] "Generic (PLEG): container finished" podID="921c813c-e71f-4a7b-b74c-a389c71e1d4f" containerID="930c4f78c343a7f862f7eaacc2b772c58a5ace89436b2aa903219654d7773c57" exitCode=0
Mar 14 06:21:14 crc kubenswrapper[4817]: I0314 06:21:14.961325 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" event={"ID":"921c813c-e71f-4a7b-b74c-a389c71e1d4f","Type":"ContainerDied","Data":"930c4f78c343a7f862f7eaacc2b772c58a5ace89436b2aa903219654d7773c57"}
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.513820 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn"
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.674002 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-inventory\") pod \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") "
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.674880 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-secret-0\") pod \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") "
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.675004 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ssh-key-openstack-edpm-ipam\") pod \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") "
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.675041 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-combined-ca-bundle\") pod \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") "
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.675261 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ceph\") pod \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") "
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.675341 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts5cj\" (UniqueName: \"kubernetes.io/projected/921c813c-e71f-4a7b-b74c-a389c71e1d4f-kube-api-access-ts5cj\") pod \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\" (UID: \"921c813c-e71f-4a7b-b74c-a389c71e1d4f\") "
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.681241 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921c813c-e71f-4a7b-b74c-a389c71e1d4f-kube-api-access-ts5cj" (OuterVolumeSpecName: "kube-api-access-ts5cj") pod "921c813c-e71f-4a7b-b74c-a389c71e1d4f" (UID: "921c813c-e71f-4a7b-b74c-a389c71e1d4f"). InnerVolumeSpecName "kube-api-access-ts5cj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.682068 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "921c813c-e71f-4a7b-b74c-a389c71e1d4f" (UID: "921c813c-e71f-4a7b-b74c-a389c71e1d4f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.690644 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ceph" (OuterVolumeSpecName: "ceph") pod "921c813c-e71f-4a7b-b74c-a389c71e1d4f" (UID: "921c813c-e71f-4a7b-b74c-a389c71e1d4f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.707536 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "921c813c-e71f-4a7b-b74c-a389c71e1d4f" (UID: "921c813c-e71f-4a7b-b74c-a389c71e1d4f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.721265 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "921c813c-e71f-4a7b-b74c-a389c71e1d4f" (UID: "921c813c-e71f-4a7b-b74c-a389c71e1d4f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.744631 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-inventory" (OuterVolumeSpecName: "inventory") pod "921c813c-e71f-4a7b-b74c-a389c71e1d4f" (UID: "921c813c-e71f-4a7b-b74c-a389c71e1d4f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.778519 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.778572 4817 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.778582 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.778592 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts5cj\" (UniqueName: \"kubernetes.io/projected/921c813c-e71f-4a7b-b74c-a389c71e1d4f-kube-api-access-ts5cj\") on node \"crc\" DevicePath \"\""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.778602 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-inventory\") on node \"crc\" DevicePath \"\""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.778631 4817 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/921c813c-e71f-4a7b-b74c-a389c71e1d4f-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.979920 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn" event={"ID":"921c813c-e71f-4a7b-b74c-a389c71e1d4f","Type":"ContainerDied","Data":"7fc0f65baae56b67b2775f0eef66f420ed35e8054816063a1ae6a1ddfee8dc5e"}
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.979988 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc0f65baae56b67b2775f0eef66f420ed35e8054816063a1ae6a1ddfee8dc5e"
Mar 14 06:21:16 crc kubenswrapper[4817]: I0314 06:21:16.980071 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.085867 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"]
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088476 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerName="extract-content"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088500 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerName="extract-content"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088516 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerName="extract-content"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088522 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerName="extract-content"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088550 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerName="extract-utilities"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088561 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerName="extract-utilities"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088575 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerName="extract-utilities"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088583 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerName="extract-utilities"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088596 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088604 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088644 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088653 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088669 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088677 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088721 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921c813c-e71f-4a7b-b74c-a389c71e1d4f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088738 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="921c813c-e71f-4a7b-b74c-a389c71e1d4f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088762 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" containerName="extract-content"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088770 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" containerName="extract-content"
Mar 14 06:21:17 crc kubenswrapper[4817]: E0314 06:21:17.088782 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" containerName="extract-utilities"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.088789 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" containerName="extract-utilities"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.089012 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="756b8f2a-768a-4198-ac30-6dcc15497542" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.089031 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73f6175-582b-4374-b2c5-e63552b98cd2" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.089043 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="921c813c-e71f-4a7b-b74c-a389c71e1d4f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.089057 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9c285c-e272-4976-b90a-cbca8c3c1c28" containerName="registry-server"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.089699 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.092582 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.092786 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.092984 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.093365 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gzc5l"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.093525 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.093659 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.095909 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.095939 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.097284 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.110453 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"]
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.185172 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.185374 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.185467 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.185572 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.185614 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.185659 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.185768 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.185877 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.186006 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.186059 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.186079 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2g45\" (UniqueName: \"kubernetes.io/projected/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-kube-api-access-t2g45\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.186225 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.186391 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.287766 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288181 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288211 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288245 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288283 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288361 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288395 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288429 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288455 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2g45\" (UniqueName: \"kubernetes.io/projected/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-kube-api-access-t2g45\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288493 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288538 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.288945 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.289076 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.289702 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.290426 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.292965 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"
Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.294052 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName:
\"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.294101 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.294522 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.296303 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.297793 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.297855 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.300825 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.300819 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.301335 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: 
I0314 06:21:17.306997 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2g45\" (UniqueName: \"kubernetes.io/projected/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-kube-api-access-t2g45\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.409806 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.942068 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994"] Mar 14 06:21:17 crc kubenswrapper[4817]: I0314 06:21:17.996971 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" event={"ID":"78a366f5-7ad6-43e0-be63-4c63cf2b21e8","Type":"ContainerStarted","Data":"2f5ff429de29508859be3cd014109278d8c15ca5980ea77c419d4b0639ec6792"} Mar 14 06:21:19 crc kubenswrapper[4817]: I0314 06:21:19.006794 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" event={"ID":"78a366f5-7ad6-43e0-be63-4c63cf2b21e8","Type":"ContainerStarted","Data":"5878eb3e1a2ffa15604e2abcbff63a3f860259a6fc4d7da839297392aab09b80"} Mar 14 06:21:19 crc kubenswrapper[4817]: I0314 06:21:19.035910 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" podStartSLOduration=1.4861723470000001 podStartE2EDuration="2.035870625s" podCreationTimestamp="2026-03-14 06:21:17 +0000 UTC" firstStartedPulling="2026-03-14 06:21:17.94877452 +0000 UTC m=+2931.987035266" lastFinishedPulling="2026-03-14 06:21:18.498472798 +0000 UTC 
m=+2932.536733544" observedRunningTime="2026-03-14 06:21:19.028332011 +0000 UTC m=+2933.066592757" watchObservedRunningTime="2026-03-14 06:21:19.035870625 +0000 UTC m=+2933.074131371" Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.156932 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557822-lhpn7"] Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.159372 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.163400 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.163751 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.164706 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.165987 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf47p\" (UniqueName: \"kubernetes.io/projected/578c4c2d-5eb1-4d88-947d-155d05b24cec-kube-api-access-bf47p\") pod \"auto-csr-approver-29557822-lhpn7\" (UID: \"578c4c2d-5eb1-4d88-947d-155d05b24cec\") " pod="openshift-infra/auto-csr-approver-29557822-lhpn7" Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.171754 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557822-lhpn7"] Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.268127 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf47p\" (UniqueName: \"kubernetes.io/projected/578c4c2d-5eb1-4d88-947d-155d05b24cec-kube-api-access-bf47p\") pod 
\"auto-csr-approver-29557822-lhpn7\" (UID: \"578c4c2d-5eb1-4d88-947d-155d05b24cec\") " pod="openshift-infra/auto-csr-approver-29557822-lhpn7" Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.296856 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf47p\" (UniqueName: \"kubernetes.io/projected/578c4c2d-5eb1-4d88-947d-155d05b24cec-kube-api-access-bf47p\") pod \"auto-csr-approver-29557822-lhpn7\" (UID: \"578c4c2d-5eb1-4d88-947d-155d05b24cec\") " pod="openshift-infra/auto-csr-approver-29557822-lhpn7" Mar 14 06:22:00 crc kubenswrapper[4817]: I0314 06:22:00.495522 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" Mar 14 06:22:01 crc kubenswrapper[4817]: I0314 06:22:01.010388 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557822-lhpn7"] Mar 14 06:22:01 crc kubenswrapper[4817]: I0314 06:22:01.433695 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" event={"ID":"578c4c2d-5eb1-4d88-947d-155d05b24cec","Type":"ContainerStarted","Data":"e1e51bc22e5af3554376524d4789549da8eb30cabe8f109fac614c51f5b9fb36"} Mar 14 06:22:02 crc kubenswrapper[4817]: I0314 06:22:02.443504 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" event={"ID":"578c4c2d-5eb1-4d88-947d-155d05b24cec","Type":"ContainerStarted","Data":"971a84a12e9c0ab4f5585254f513d5a3b4c8ada50bcb4caefa3157e73e2b6917"} Mar 14 06:22:02 crc kubenswrapper[4817]: I0314 06:22:02.463379 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" podStartSLOduration=1.402226417 podStartE2EDuration="2.463360575s" podCreationTimestamp="2026-03-14 06:22:00 +0000 UTC" firstStartedPulling="2026-03-14 06:22:01.01407303 +0000 UTC m=+2975.052333776" lastFinishedPulling="2026-03-14 
06:22:02.075207188 +0000 UTC m=+2976.113467934" observedRunningTime="2026-03-14 06:22:02.457340834 +0000 UTC m=+2976.495601580" watchObservedRunningTime="2026-03-14 06:22:02.463360575 +0000 UTC m=+2976.501621321" Mar 14 06:22:03 crc kubenswrapper[4817]: I0314 06:22:03.453660 4817 generic.go:334] "Generic (PLEG): container finished" podID="578c4c2d-5eb1-4d88-947d-155d05b24cec" containerID="971a84a12e9c0ab4f5585254f513d5a3b4c8ada50bcb4caefa3157e73e2b6917" exitCode=0 Mar 14 06:22:03 crc kubenswrapper[4817]: I0314 06:22:03.453772 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" event={"ID":"578c4c2d-5eb1-4d88-947d-155d05b24cec","Type":"ContainerDied","Data":"971a84a12e9c0ab4f5585254f513d5a3b4c8ada50bcb4caefa3157e73e2b6917"} Mar 14 06:22:04 crc kubenswrapper[4817]: I0314 06:22:04.820455 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" Mar 14 06:22:04 crc kubenswrapper[4817]: I0314 06:22:04.870255 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf47p\" (UniqueName: \"kubernetes.io/projected/578c4c2d-5eb1-4d88-947d-155d05b24cec-kube-api-access-bf47p\") pod \"578c4c2d-5eb1-4d88-947d-155d05b24cec\" (UID: \"578c4c2d-5eb1-4d88-947d-155d05b24cec\") " Mar 14 06:22:04 crc kubenswrapper[4817]: I0314 06:22:04.880041 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578c4c2d-5eb1-4d88-947d-155d05b24cec-kube-api-access-bf47p" (OuterVolumeSpecName: "kube-api-access-bf47p") pod "578c4c2d-5eb1-4d88-947d-155d05b24cec" (UID: "578c4c2d-5eb1-4d88-947d-155d05b24cec"). InnerVolumeSpecName "kube-api-access-bf47p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:22:04 crc kubenswrapper[4817]: I0314 06:22:04.974649 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf47p\" (UniqueName: \"kubernetes.io/projected/578c4c2d-5eb1-4d88-947d-155d05b24cec-kube-api-access-bf47p\") on node \"crc\" DevicePath \"\"" Mar 14 06:22:05 crc kubenswrapper[4817]: I0314 06:22:05.475415 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" event={"ID":"578c4c2d-5eb1-4d88-947d-155d05b24cec","Type":"ContainerDied","Data":"e1e51bc22e5af3554376524d4789549da8eb30cabe8f109fac614c51f5b9fb36"} Mar 14 06:22:05 crc kubenswrapper[4817]: I0314 06:22:05.475768 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e51bc22e5af3554376524d4789549da8eb30cabe8f109fac614c51f5b9fb36" Mar 14 06:22:05 crc kubenswrapper[4817]: I0314 06:22:05.475511 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557822-lhpn7" Mar 14 06:22:05 crc kubenswrapper[4817]: I0314 06:22:05.916657 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557816-sjhsj"] Mar 14 06:22:05 crc kubenswrapper[4817]: I0314 06:22:05.930542 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557816-sjhsj"] Mar 14 06:22:06 crc kubenswrapper[4817]: I0314 06:22:06.750269 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aeec8d2-99b9-4797-b36e-e0344ccc8b19" path="/var/lib/kubelet/pods/1aeec8d2-99b9-4797-b36e-e0344ccc8b19/volumes" Mar 14 06:22:38 crc kubenswrapper[4817]: I0314 06:22:38.566142 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 14 06:22:38 crc kubenswrapper[4817]: I0314 06:22:38.566854 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:22:45 crc kubenswrapper[4817]: I0314 06:22:45.572427 4817 scope.go:117] "RemoveContainer" containerID="b77288481b908d1c7595c2c56820b36446df15976dd19f25c77a258ecf7c8bae" Mar 14 06:23:08 crc kubenswrapper[4817]: I0314 06:23:08.565964 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:23:08 crc kubenswrapper[4817]: I0314 06:23:08.566578 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:23:38 crc kubenswrapper[4817]: I0314 06:23:38.566254 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:23:38 crc kubenswrapper[4817]: I0314 06:23:38.567664 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:23:38 crc kubenswrapper[4817]: I0314 06:23:38.567764 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 06:23:38 crc kubenswrapper[4817]: I0314 06:23:38.570973 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 06:23:38 crc kubenswrapper[4817]: I0314 06:23:38.571526 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" gracePeriod=600 Mar 14 06:23:38 crc kubenswrapper[4817]: E0314 06:23:38.697766 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:23:39 crc kubenswrapper[4817]: I0314 06:23:39.452833 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" exitCode=0 Mar 14 06:23:39 crc kubenswrapper[4817]: I0314 06:23:39.453006 4817 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537"} Mar 14 06:23:39 crc kubenswrapper[4817]: I0314 06:23:39.453066 4817 scope.go:117] "RemoveContainer" containerID="ae702fc959da5668d0a32e1c36194134963e06fd40bf87f3f21cc52386ca0ffe" Mar 14 06:23:39 crc kubenswrapper[4817]: I0314 06:23:39.454143 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:23:39 crc kubenswrapper[4817]: E0314 06:23:39.454479 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:23:53 crc kubenswrapper[4817]: I0314 06:23:53.733523 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:23:53 crc kubenswrapper[4817]: E0314 06:23:53.734373 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.151574 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557824-rsgvk"] Mar 14 06:24:00 crc kubenswrapper[4817]: E0314 06:24:00.152720 
4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578c4c2d-5eb1-4d88-947d-155d05b24cec" containerName="oc" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.152741 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="578c4c2d-5eb1-4d88-947d-155d05b24cec" containerName="oc" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.153003 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="578c4c2d-5eb1-4d88-947d-155d05b24cec" containerName="oc" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.153800 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557824-rsgvk" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.156100 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.156787 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.157133 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.174562 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557824-rsgvk"] Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.303130 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmldg\" (UniqueName: \"kubernetes.io/projected/4879cec4-07a4-4b7e-a9be-1e4ac33977ed-kube-api-access-fmldg\") pod \"auto-csr-approver-29557824-rsgvk\" (UID: \"4879cec4-07a4-4b7e-a9be-1e4ac33977ed\") " pod="openshift-infra/auto-csr-approver-29557824-rsgvk" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.405161 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fmldg\" (UniqueName: \"kubernetes.io/projected/4879cec4-07a4-4b7e-a9be-1e4ac33977ed-kube-api-access-fmldg\") pod \"auto-csr-approver-29557824-rsgvk\" (UID: \"4879cec4-07a4-4b7e-a9be-1e4ac33977ed\") " pod="openshift-infra/auto-csr-approver-29557824-rsgvk" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.424642 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmldg\" (UniqueName: \"kubernetes.io/projected/4879cec4-07a4-4b7e-a9be-1e4ac33977ed-kube-api-access-fmldg\") pod \"auto-csr-approver-29557824-rsgvk\" (UID: \"4879cec4-07a4-4b7e-a9be-1e4ac33977ed\") " pod="openshift-infra/auto-csr-approver-29557824-rsgvk" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.488815 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557824-rsgvk" Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.659138 4817 generic.go:334] "Generic (PLEG): container finished" podID="78a366f5-7ad6-43e0-be63-4c63cf2b21e8" containerID="5878eb3e1a2ffa15604e2abcbff63a3f860259a6fc4d7da839297392aab09b80" exitCode=0 Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.659242 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" event={"ID":"78a366f5-7ad6-43e0-be63-4c63cf2b21e8","Type":"ContainerDied","Data":"5878eb3e1a2ffa15604e2abcbff63a3f860259a6fc4d7da839297392aab09b80"} Mar 14 06:24:00 crc kubenswrapper[4817]: I0314 06:24:00.980313 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557824-rsgvk"] Mar 14 06:24:01 crc kubenswrapper[4817]: I0314 06:24:01.668178 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557824-rsgvk" event={"ID":"4879cec4-07a4-4b7e-a9be-1e4ac33977ed","Type":"ContainerStarted","Data":"38a88a66a1e5ab4492473aa8269c241064ed6a1afbee9775f5ed79a8145b1b04"} Mar 14 06:24:02 crc 
kubenswrapper[4817]: I0314 06:24:02.123735 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.242867 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-3\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.242966 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2g45\" (UniqueName: \"kubernetes.io/projected/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-kube-api-access-t2g45\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243011 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-0\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243161 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph-nova-0\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243199 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-1\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" 
(UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243225 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-inventory\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243264 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-2\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243345 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-custom-ceph-combined-ca-bundle\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243401 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-0\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243426 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-extra-config-0\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243469 4817 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ssh-key-openstack-edpm-ipam\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243492 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-1\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.243554 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph\") pod \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\" (UID: \"78a366f5-7ad6-43e0-be63-4c63cf2b21e8\") " Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.255380 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph" (OuterVolumeSpecName: "ceph") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.298386 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.311798 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-kube-api-access-t2g45" (OuterVolumeSpecName: "kube-api-access-t2g45") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "kube-api-access-t2g45". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.311992 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.314524 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.321218 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-inventory" (OuterVolumeSpecName: "inventory") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.321320 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.331006 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.335018 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.338414 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346407 4817 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346445 4817 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346457 4817 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-inventory\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346468 4817 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346479 4817 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346488 4817 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346497 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc 
kubenswrapper[4817]: I0314 06:24:02.346506 4817 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346514 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2g45\" (UniqueName: \"kubernetes.io/projected/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-kube-api-access-t2g45\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.346522 4817 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.348693 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.351460 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.351607 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "78a366f5-7ad6-43e0-be63-4c63cf2b21e8" (UID: "78a366f5-7ad6-43e0-be63-4c63cf2b21e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.448482 4817 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.448524 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.448538 4817 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/78a366f5-7ad6-43e0-be63-4c63cf2b21e8-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.682803 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" event={"ID":"78a366f5-7ad6-43e0-be63-4c63cf2b21e8","Type":"ContainerDied","Data":"2f5ff429de29508859be3cd014109278d8c15ca5980ea77c419d4b0639ec6792"} Mar 14 06:24:02 crc kubenswrapper[4817]: I0314 06:24:02.682844 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5ff429de29508859be3cd014109278d8c15ca5980ea77c419d4b0639ec6792" Mar 14 06:24:02 crc kubenswrapper[4817]: 
I0314 06:24:02.682914 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994" Mar 14 06:24:03 crc kubenswrapper[4817]: I0314 06:24:03.694163 4817 generic.go:334] "Generic (PLEG): container finished" podID="4879cec4-07a4-4b7e-a9be-1e4ac33977ed" containerID="b3df43fbc1536fc262e8ff496113bdc09bf597892243ab9147a033c8bfe1b1b1" exitCode=0 Mar 14 06:24:03 crc kubenswrapper[4817]: I0314 06:24:03.694546 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557824-rsgvk" event={"ID":"4879cec4-07a4-4b7e-a9be-1e4ac33977ed","Type":"ContainerDied","Data":"b3df43fbc1536fc262e8ff496113bdc09bf597892243ab9147a033c8bfe1b1b1"} Mar 14 06:24:05 crc kubenswrapper[4817]: I0314 06:24:05.035337 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557824-rsgvk" Mar 14 06:24:05 crc kubenswrapper[4817]: I0314 06:24:05.112854 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmldg\" (UniqueName: \"kubernetes.io/projected/4879cec4-07a4-4b7e-a9be-1e4ac33977ed-kube-api-access-fmldg\") pod \"4879cec4-07a4-4b7e-a9be-1e4ac33977ed\" (UID: \"4879cec4-07a4-4b7e-a9be-1e4ac33977ed\") " Mar 14 06:24:05 crc kubenswrapper[4817]: I0314 06:24:05.119077 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4879cec4-07a4-4b7e-a9be-1e4ac33977ed-kube-api-access-fmldg" (OuterVolumeSpecName: "kube-api-access-fmldg") pod "4879cec4-07a4-4b7e-a9be-1e4ac33977ed" (UID: "4879cec4-07a4-4b7e-a9be-1e4ac33977ed"). InnerVolumeSpecName "kube-api-access-fmldg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:24:05 crc kubenswrapper[4817]: I0314 06:24:05.216247 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmldg\" (UniqueName: \"kubernetes.io/projected/4879cec4-07a4-4b7e-a9be-1e4ac33977ed-kube-api-access-fmldg\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:05 crc kubenswrapper[4817]: I0314 06:24:05.720853 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557824-rsgvk" event={"ID":"4879cec4-07a4-4b7e-a9be-1e4ac33977ed","Type":"ContainerDied","Data":"38a88a66a1e5ab4492473aa8269c241064ed6a1afbee9775f5ed79a8145b1b04"} Mar 14 06:24:05 crc kubenswrapper[4817]: I0314 06:24:05.720978 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557824-rsgvk" Mar 14 06:24:05 crc kubenswrapper[4817]: I0314 06:24:05.720980 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38a88a66a1e5ab4492473aa8269c241064ed6a1afbee9775f5ed79a8145b1b04" Mar 14 06:24:06 crc kubenswrapper[4817]: I0314 06:24:06.107077 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557818-m6dtc"] Mar 14 06:24:06 crc kubenswrapper[4817]: I0314 06:24:06.114278 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557818-m6dtc"] Mar 14 06:24:06 crc kubenswrapper[4817]: I0314 06:24:06.745595 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1911b834-0d38-466a-9549-fce55d991182" path="/var/lib/kubelet/pods/1911b834-0d38-466a-9549-fce55d991182/volumes" Mar 14 06:24:07 crc kubenswrapper[4817]: I0314 06:24:07.732650 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:24:07 crc kubenswrapper[4817]: E0314 06:24:07.733422 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.432377 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 14 06:24:17 crc kubenswrapper[4817]: E0314 06:24:17.433424 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4879cec4-07a4-4b7e-a9be-1e4ac33977ed" containerName="oc" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.433441 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="4879cec4-07a4-4b7e-a9be-1e4ac33977ed" containerName="oc" Mar 14 06:24:17 crc kubenswrapper[4817]: E0314 06:24:17.433460 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a366f5-7ad6-43e0-be63-4c63cf2b21e8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.433469 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a366f5-7ad6-43e0-be63-4c63cf2b21e8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.433643 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="4879cec4-07a4-4b7e-a9be-1e4ac33977ed" containerName="oc" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.433661 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a366f5-7ad6-43e0-be63-4c63cf2b21e8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.434593 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.436811 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.436858 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.459743 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.549559 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.551327 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.553303 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.571809 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.589852 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-run\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.589954 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc 
kubenswrapper[4817]: I0314 06:24:17.589984 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590007 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590038 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590055 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590090 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590301 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590471 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590565 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590612 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590635 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590687 4817 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jc9r\" (UniqueName: \"kubernetes.io/projected/0f16a837-b3ad-4283-bd8e-19512d545253-kube-api-access-8jc9r\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590787 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590839 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.590861 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0f16a837-b3ad-4283-bd8e-19512d545253-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.693151 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.693553 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.693594 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.693618 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.693644 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.693696 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jc9r\" (UniqueName: \"kubernetes.io/projected/0f16a837-b3ad-4283-bd8e-19512d545253-kube-api-access-8jc9r\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.693755 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-lib-modules\") pod 
\"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.693933 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694006 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694028 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0f16a837-b3ad-4283-bd8e-19512d545253-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694143 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694133 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: 
I0314 06:24:17.694180 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-lib-modules\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694262 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-sys\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694303 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-run\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694326 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694355 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-scripts\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694375 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-run\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694396 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694462 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694499 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694525 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694561 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2z8p\" (UniqueName: \"kubernetes.io/projected/1b031afc-6d59-484d-8490-f684bbad769f-kube-api-access-t2z8p\") pod 
\"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694597 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694623 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694656 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-run\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694667 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694692 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc 
kubenswrapper[4817]: I0314 06:24:17.694723 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-config-data\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694750 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-dev\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694774 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694800 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694845 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694874 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694878 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.694970 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b031afc-6d59-484d-8490-f684bbad769f-ceph\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.695009 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.695103 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.695148 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: 
\"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.695179 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.695184 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.695230 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f16a837-b3ad-4283-bd8e-19512d545253-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.700479 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.700529 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0f16a837-b3ad-4283-bd8e-19512d545253-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.700776 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.701000 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.702496 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f16a837-b3ad-4283-bd8e-19512d545253-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.719905 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jc9r\" (UniqueName: \"kubernetes.io/projected/0f16a837-b3ad-4283-bd8e-19512d545253-kube-api-access-8jc9r\") pod \"cinder-volume-volume1-0\" (UID: \"0f16a837-b3ad-4283-bd8e-19512d545253\") " pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.760732 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799152 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799254 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-scripts\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799325 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799411 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2z8p\" (UniqueName: \"kubernetes.io/projected/1b031afc-6d59-484d-8490-f684bbad769f-kube-api-access-t2z8p\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799478 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-run\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799507 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799528 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-config-data\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799564 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-dev\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799581 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.799637 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.800042 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-run\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc 
kubenswrapper[4817]: I0314 06:24:17.800361 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.800372 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b031afc-6d59-484d-8490-f684bbad769f-ceph\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.800745 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.800856 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.801428 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.801554 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-lib-modules\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.801688 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-sys\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.802027 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-sys\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.802194 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-dev\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.802968 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.803641 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.804093 4817 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.804109 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.804122 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.806544 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.806763 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-scripts\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.806825 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b031afc-6d59-484d-8490-f684bbad769f-lib-modules\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " 
pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.810211 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b031afc-6d59-484d-8490-f684bbad769f-ceph\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.817337 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-config-data\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.819473 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b031afc-6d59-484d-8490-f684bbad769f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.835596 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2z8p\" (UniqueName: \"kubernetes.io/projected/1b031afc-6d59-484d-8490-f684bbad769f-kube-api-access-t2z8p\") pod \"cinder-backup-0\" (UID: \"1b031afc-6d59-484d-8490-f684bbad769f\") " pod="openstack/cinder-backup-0" Mar 14 06:24:17 crc kubenswrapper[4817]: I0314 06:24:17.870038 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.130527 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-9wfp5"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.132736 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.141845 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-9wfp5"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.208582 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-30bc-account-create-update-d8bsz"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.210350 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.217732 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.219666 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986e1019-bba0-4ef2-9b54-0929a563895b-operator-scripts\") pod \"manila-db-create-9wfp5\" (UID: \"986e1019-bba0-4ef2-9b54-0929a563895b\") " pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.219793 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4nx\" (UniqueName: \"kubernetes.io/projected/986e1019-bba0-4ef2-9b54-0929a563895b-kube-api-access-4x4nx\") pod \"manila-db-create-9wfp5\" (UID: \"986e1019-bba0-4ef2-9b54-0929a563895b\") " pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.219928 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-30bc-account-create-update-d8bsz"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.238781 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7fc7cb4589-8j695"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.242136 4817 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.248138 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-k27sd" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.248498 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.248598 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.248839 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.252087 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fc7cb4589-8j695"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.282841 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 14 06:24:18 crc kubenswrapper[4817]: W0314 06:24:18.295049 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f16a837_b3ad_4283_bd8e_19512d545253.slice/crio-37ffd7d5e89b969c6e1b2d95f34852e0ddeddfd5df7095044033959cb317cc23 WatchSource:0}: Error finding container 37ffd7d5e89b969c6e1b2d95f34852e0ddeddfd5df7095044033959cb317cc23: Status 404 returned error can't find the container with id 37ffd7d5e89b969c6e1b2d95f34852e0ddeddfd5df7095044033959cb317cc23 Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.321823 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4nx\" (UniqueName: \"kubernetes.io/projected/986e1019-bba0-4ef2-9b54-0929a563895b-kube-api-access-4x4nx\") pod \"manila-db-create-9wfp5\" (UID: \"986e1019-bba0-4ef2-9b54-0929a563895b\") " pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:18 
crc kubenswrapper[4817]: I0314 06:24:18.321918 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-config-data\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.321957 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmjq\" (UniqueName: \"kubernetes.io/projected/e4d31186-90a1-4a20-a042-417e1ed712c6-kube-api-access-zmmjq\") pod \"manila-30bc-account-create-update-d8bsz\" (UID: \"e4d31186-90a1-4a20-a042-417e1ed712c6\") " pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.321987 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-horizon-secret-key\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.322023 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d31186-90a1-4a20-a042-417e1ed712c6-operator-scripts\") pod \"manila-30bc-account-create-update-d8bsz\" (UID: \"e4d31186-90a1-4a20-a042-417e1ed712c6\") " pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.322044 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986e1019-bba0-4ef2-9b54-0929a563895b-operator-scripts\") pod \"manila-db-create-9wfp5\" (UID: 
\"986e1019-bba0-4ef2-9b54-0929a563895b\") " pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.322074 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65lhq\" (UniqueName: \"kubernetes.io/projected/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-kube-api-access-65lhq\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.322103 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-scripts\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.322155 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-logs\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.327037 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986e1019-bba0-4ef2-9b54-0929a563895b-operator-scripts\") pod \"manila-db-create-9wfp5\" (UID: \"986e1019-bba0-4ef2-9b54-0929a563895b\") " pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.357975 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.359633 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.370930 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.370930 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pxpbs" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.374062 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.374356 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.386732 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4nx\" (UniqueName: \"kubernetes.io/projected/986e1019-bba0-4ef2-9b54-0929a563895b-kube-api-access-4x4nx\") pod \"manila-db-create-9wfp5\" (UID: \"986e1019-bba0-4ef2-9b54-0929a563895b\") " pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.398019 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d4f6d988c-zkbs5"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.399876 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.422472 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424265 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424347 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65lhq\" (UniqueName: \"kubernetes.io/projected/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-kube-api-access-65lhq\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424396 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-scripts\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424432 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424462 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424497 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424530 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-logs\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424596 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424638 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475ws\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-kube-api-access-475ws\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424663 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424690 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-config-data\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424717 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-logs\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424756 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-ceph\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424784 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmmjq\" (UniqueName: \"kubernetes.io/projected/e4d31186-90a1-4a20-a042-417e1ed712c6-kube-api-access-zmmjq\") pod \"manila-30bc-account-create-update-d8bsz\" (UID: \"e4d31186-90a1-4a20-a042-417e1ed712c6\") " pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424816 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-horizon-secret-key\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.424869 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d31186-90a1-4a20-a042-417e1ed712c6-operator-scripts\") pod \"manila-30bc-account-create-update-d8bsz\" (UID: \"e4d31186-90a1-4a20-a042-417e1ed712c6\") " pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.426030 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-scripts\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.426499 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-config-data\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.426745 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-logs\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.429261 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d31186-90a1-4a20-a042-417e1ed712c6-operator-scripts\") pod 
\"manila-30bc-account-create-update-d8bsz\" (UID: \"e4d31186-90a1-4a20-a042-417e1ed712c6\") " pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.430847 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-horizon-secret-key\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.441015 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d4f6d988c-zkbs5"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.447871 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65lhq\" (UniqueName: \"kubernetes.io/projected/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-kube-api-access-65lhq\") pod \"horizon-7fc7cb4589-8j695\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") " pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.448283 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmmjq\" (UniqueName: \"kubernetes.io/projected/e4d31186-90a1-4a20-a042-417e1ed712c6-kube-api-access-zmmjq\") pod \"manila-30bc-account-create-update-d8bsz\" (UID: \"e4d31186-90a1-4a20-a042-417e1ed712c6\") " pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.465416 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.470583 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.473646 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.479560 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.481803 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.495544 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.526683 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.526806 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.526839 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81ec49d8-d391-457a-ac99-01d35c496fa1-horizon-secret-key\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.526862 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdk6m\" (UniqueName: 
\"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-kube-api-access-sdk6m\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.526884 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.526930 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.526953 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.526981 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zprgp\" (UniqueName: \"kubernetes.io/projected/81ec49d8-d391-457a-ac99-01d35c496fa1-kube-api-access-zprgp\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530038 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530156 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530219 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475ws\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-kube-api-access-475ws\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530255 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-ceph\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530280 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530315 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-logs\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530333 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-logs\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530387 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-ceph\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530447 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ec49d8-d391-457a-ac99-01d35c496fa1-logs\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530478 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-scripts\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530498 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-httpd-run\") 
pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530535 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530639 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530720 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.530752 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-config-data\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.531171 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") 
" pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.532029 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-logs\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.532796 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.533207 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.535649 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.536006 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.537223 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: E0314 06:24:18.537358 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance kube-api-access-475ws public-tls-certs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="fc059663-56a4-43c2-973e-041a6a561f0b" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.538799 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.540167 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-ceph\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.549339 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.558997 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475ws\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-kube-api-access-475ws\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.586490 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fc7cb4589-8j695" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.588865 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.631792 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633316 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633361 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zprgp\" (UniqueName: \"kubernetes.io/projected/81ec49d8-d391-457a-ac99-01d35c496fa1-kube-api-access-zprgp\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633409 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633441 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-ceph\") pod 
\"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633465 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-logs\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633499 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ec49d8-d391-457a-ac99-01d35c496fa1-logs\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633534 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-scripts\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633549 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633570 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 
crc kubenswrapper[4817]: I0314 06:24:18.633633 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633659 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-config-data\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633676 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633707 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81ec49d8-d391-457a-ac99-01d35c496fa1-horizon-secret-key\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633724 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdk6m\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-kube-api-access-sdk6m\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.633949 4817 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.647056 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-config-data\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.648300 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-scripts\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.655082 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ec49d8-d391-457a-ac99-01d35c496fa1-logs\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.657688 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-logs\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.658021 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-config-data\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.658618 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81ec49d8-d391-457a-ac99-01d35c496fa1-horizon-secret-key\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.660582 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zprgp\" (UniqueName: \"kubernetes.io/projected/81ec49d8-d391-457a-ac99-01d35c496fa1-kube-api-access-zprgp\") pod \"horizon-6d4f6d988c-zkbs5\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") " pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.665762 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.667403 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-scripts\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.668334 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.668443 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.670091 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdk6m\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-kube-api-access-sdk6m\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.671145 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.677397 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-ceph\") pod \"glance-default-internal-api-0\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.726553 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d4f6d988c-zkbs5" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.809710 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.921040 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1b031afc-6d59-484d-8490-f684bbad769f","Type":"ContainerStarted","Data":"c9c1378fcb6f9173549567097b2ba4468acf0a992119ce7582d3061bd3c63873"} Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.979137 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:18 crc kubenswrapper[4817]: I0314 06:24:18.979165 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"0f16a837-b3ad-4283-bd8e-19512d545253","Type":"ContainerStarted","Data":"37ffd7d5e89b969c6e1b2d95f34852e0ddeddfd5df7095044033959cb317cc23"} Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.008704 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.072264 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-9wfp5"] Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.157495 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-scripts\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.157544 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-config-data\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.157577 4817 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-combined-ca-bundle\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.157607 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475ws\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-kube-api-access-475ws\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.157636 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-logs\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.157686 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-ceph\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.157755 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-httpd-run\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.157826 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc 
kubenswrapper[4817]: I0314 06:24:19.157853 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-public-tls-certs\") pod \"fc059663-56a4-43c2-973e-041a6a561f0b\" (UID: \"fc059663-56a4-43c2-973e-041a6a561f0b\") " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.165920 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-logs" (OuterVolumeSpecName: "logs") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.166161 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.167656 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-scripts" (OuterVolumeSpecName: "scripts") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.168348 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.169720 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-config-data" (OuterVolumeSpecName: "config-data") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.170339 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.174181 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-ceph" (OuterVolumeSpecName: "ceph") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.178040 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.179660 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-kube-api-access-475ws" (OuterVolumeSpecName: "kube-api-access-475ws") pod "fc059663-56a4-43c2-973e-041a6a561f0b" (UID: "fc059663-56a4-43c2-973e-041a6a561f0b"). InnerVolumeSpecName "kube-api-access-475ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.260574 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.260714 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.260772 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.260854 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-475ws\" (UniqueName: \"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-kube-api-access-475ws\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.260949 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-logs\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.261022 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/fc059663-56a4-43c2-973e-041a6a561f0b-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.261137 4817 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc059663-56a4-43c2-973e-041a6a561f0b-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.261259 4817 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.261343 4817 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc059663-56a4-43c2-973e-041a6a561f0b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.271608 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-30bc-account-create-update-d8bsz"] Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.308842 4817 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.380431 4817 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.431663 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d4f6d988c-zkbs5"] Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.489943 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fc7cb4589-8j695"] Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.588359 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Mar 14 06:24:19 crc kubenswrapper[4817]: W0314 06:24:19.714518 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68f022c7_6db3_425f_85e7_6c7c46a14b42.slice/crio-c633138799fa201262320ed9b885b8e313eabfbbd67b93dbe0ad3cf49611d56b WatchSource:0}: Error finding container c633138799fa201262320ed9b885b8e313eabfbbd67b93dbe0ad3cf49611d56b: Status 404 returned error can't find the container with id c633138799fa201262320ed9b885b8e313eabfbbd67b93dbe0ad3cf49611d56b Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.990676 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fc7cb4589-8j695" event={"ID":"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7","Type":"ContainerStarted","Data":"1e5a02b050d9a838d2f2dd4a96a19cf7c27103435711db7f8b988da2d2c9ad77"} Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.993110 4817 generic.go:334] "Generic (PLEG): container finished" podID="e4d31186-90a1-4a20-a042-417e1ed712c6" containerID="41ebf9ccb10c7a2e2ed984c8ba16824c4c89733f84f3566fcf02ed3d9b22a5fd" exitCode=0 Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.993320 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-30bc-account-create-update-d8bsz" event={"ID":"e4d31186-90a1-4a20-a042-417e1ed712c6","Type":"ContainerDied","Data":"41ebf9ccb10c7a2e2ed984c8ba16824c4c89733f84f3566fcf02ed3d9b22a5fd"} Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.993340 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-30bc-account-create-update-d8bsz" event={"ID":"e4d31186-90a1-4a20-a042-417e1ed712c6","Type":"ContainerStarted","Data":"f73e91d6b3c24ac461fdf9dcd5abe8359ff05b5e2572abf98d39e11f06fcce7d"} Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.994794 4817 generic.go:334] "Generic (PLEG): container finished" podID="986e1019-bba0-4ef2-9b54-0929a563895b" 
containerID="4b4c79f96e16812f9625ed94592b3ae15b42ae76beb21f749e53e52b413e8159" exitCode=0 Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.994841 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-9wfp5" event={"ID":"986e1019-bba0-4ef2-9b54-0929a563895b","Type":"ContainerDied","Data":"4b4c79f96e16812f9625ed94592b3ae15b42ae76beb21f749e53e52b413e8159"} Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.994858 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-9wfp5" event={"ID":"986e1019-bba0-4ef2-9b54-0929a563895b","Type":"ContainerStarted","Data":"45f2a1d711ebe0ed249a583c7e1c0af0fc6e3ccc59ec0c93256e4ddd68598298"} Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.995614 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d4f6d988c-zkbs5" event={"ID":"81ec49d8-d391-457a-ac99-01d35c496fa1","Type":"ContainerStarted","Data":"f5a9ed74b5a454da4aa0c48ebf419a36e1adf7be6439d835f7ae05cdebcd3535"} Mar 14 06:24:19 crc kubenswrapper[4817]: I0314 06:24:19.996374 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.004589 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68f022c7-6db3-425f-85e7-6c7c46a14b42","Type":"ContainerStarted","Data":"c633138799fa201262320ed9b885b8e313eabfbbd67b93dbe0ad3cf49611d56b"} Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.167578 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.168045 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.189787 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.198047 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.214599 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.215695 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.217793 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.323867 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-logs\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc 
kubenswrapper[4817]: I0314 06:24:20.324446 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.324489 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-ceph\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.324678 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.324708 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.324730 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8hwp\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-kube-api-access-m8hwp\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 
06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.324764 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.324795 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.325043 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-config-data\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.426844 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.427151 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc 
kubenswrapper[4817]: I0314 06:24:20.427173 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8hwp\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-kube-api-access-m8hwp\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.427237 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.427275 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.427321 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-config-data\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.427341 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-logs\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.427377 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.427415 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-ceph\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.427576 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.428299 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-logs\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.428574 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.432059 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.432449 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.434153 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-config-data\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.434565 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-ceph\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.435315 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.446409 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8hwp\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-kube-api-access-m8hwp\") pod 
\"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.464886 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.533967 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.734347 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:24:20 crc kubenswrapper[4817]: E0314 06:24:20.734797 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.810652 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc059663-56a4-43c2-973e-041a6a561f0b" path="/var/lib/kubelet/pods/fc059663-56a4-43c2-973e-041a6a561f0b/volumes" Mar 14 06:24:20 crc kubenswrapper[4817]: I0314 06:24:20.983598 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d4f6d988c-zkbs5"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.029187 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-559cff965b-fnvkc"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.030921 4817 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.035262 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.084012 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68f022c7-6db3-425f-85e7-6c7c46a14b42","Type":"ContainerStarted","Data":"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29"} Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.147186 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"0f16a837-b3ad-4283-bd8e-19512d545253","Type":"ContainerStarted","Data":"c70d1be40fecfce5f1a17c6d9953a1853c809d20388493b6e0b28979268ab12d"} Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.147249 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"0f16a837-b3ad-4283-bd8e-19512d545253","Type":"ContainerStarted","Data":"2bb5a88938a0c7a534a74f949ec93eaa3453549caf26435dc2e49ead9bc34138"} Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.158100 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-scripts\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.158181 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-tls-certs\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 
crc kubenswrapper[4817]: I0314 06:24:21.158221 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-combined-ca-bundle\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.158274 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-config-data\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.158344 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56147eb2-93e8-41a7-b6c9-4f40e07263c8-logs\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.158417 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmpp\" (UniqueName: \"kubernetes.io/projected/56147eb2-93e8-41a7-b6c9-4f40e07263c8-kube-api-access-rfmpp\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.158455 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-secret-key\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 
crc kubenswrapper[4817]: I0314 06:24:21.170536 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.194180 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-559cff965b-fnvkc"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.217927 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1b031afc-6d59-484d-8490-f684bbad769f","Type":"ContainerStarted","Data":"5d83dcb60b1c10b9bdfda06865f69ca1878358c95b40fd21bc8006d53644edba"} Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.218045 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"1b031afc-6d59-484d-8490-f684bbad769f","Type":"ContainerStarted","Data":"d66e4828adbbad4529f51a4efc423414bb494d24cfce6a42b08904e093c59c31"} Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.218061 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fc7cb4589-8j695"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.261727 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-scripts\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.261792 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-tls-certs\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.261855 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-combined-ca-bundle\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.261946 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-config-data\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.262128 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56147eb2-93e8-41a7-b6c9-4f40e07263c8-logs\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.262185 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfmpp\" (UniqueName: \"kubernetes.io/projected/56147eb2-93e8-41a7-b6c9-4f40e07263c8-kube-api-access-rfmpp\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.262244 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-secret-key\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.272503 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-767cf48f8d-lxbdx"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.274316 4817 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.282231 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56147eb2-93e8-41a7-b6c9-4f40e07263c8-logs\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.284659 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-scripts\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.286543 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-combined-ca-bundle\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.286837 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-secret-key\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.294220 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.314974 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfmpp\" (UniqueName: \"kubernetes.io/projected/56147eb2-93e8-41a7-b6c9-4f40e07263c8-kube-api-access-rfmpp\") pod 
\"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.315277 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-config-data\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.326282 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.874595139 podStartE2EDuration="4.326258792s" podCreationTimestamp="2026-03-14 06:24:17 +0000 UTC" firstStartedPulling="2026-03-14 06:24:18.305955739 +0000 UTC m=+3112.344216485" lastFinishedPulling="2026-03-14 06:24:19.757619392 +0000 UTC m=+3113.795880138" observedRunningTime="2026-03-14 06:24:21.194132666 +0000 UTC m=+3115.232393412" watchObservedRunningTime="2026-03-14 06:24:21.326258792 +0000 UTC m=+3115.364519538" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.326015 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-767cf48f8d-lxbdx"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.333051 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-tls-certs\") pod \"horizon-559cff965b-fnvkc\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.362054 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.368522 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/abfb19f7-bac6-45a5-953e-546d46435171-scripts\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.368565 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-combined-ca-bundle\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.368650 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abfb19f7-bac6-45a5-953e-546d46435171-config-data\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.368695 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6jf\" (UniqueName: \"kubernetes.io/projected/abfb19f7-bac6-45a5-953e-546d46435171-kube-api-access-lc6jf\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.368737 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abfb19f7-bac6-45a5-953e-546d46435171-logs\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.368818 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-horizon-secret-key\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.368869 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-horizon-tls-certs\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.386690 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.288792543 podStartE2EDuration="4.386658514s" podCreationTimestamp="2026-03-14 06:24:17 +0000 UTC" firstStartedPulling="2026-03-14 06:24:18.676948419 +0000 UTC m=+3112.715209165" lastFinishedPulling="2026-03-14 06:24:19.77481439 +0000 UTC m=+3113.813075136" observedRunningTime="2026-03-14 06:24:21.261042383 +0000 UTC m=+3115.299303129" watchObservedRunningTime="2026-03-14 06:24:21.386658514 +0000 UTC m=+3115.424919260" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.402260 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.473344 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abfb19f7-bac6-45a5-953e-546d46435171-logs\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.473590 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-horizon-secret-key\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.473683 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-horizon-tls-certs\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.473719 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abfb19f7-bac6-45a5-953e-546d46435171-scripts\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.473783 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-combined-ca-bundle\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 
06:24:21.473965 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abfb19f7-bac6-45a5-953e-546d46435171-config-data\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.474002 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc6jf\" (UniqueName: \"kubernetes.io/projected/abfb19f7-bac6-45a5-953e-546d46435171-kube-api-access-lc6jf\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.476361 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abfb19f7-bac6-45a5-953e-546d46435171-logs\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.477328 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abfb19f7-bac6-45a5-953e-546d46435171-scripts\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.478612 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abfb19f7-bac6-45a5-953e-546d46435171-config-data\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.479511 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-horizon-tls-certs\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.481976 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-combined-ca-bundle\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.487082 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abfb19f7-bac6-45a5-953e-546d46435171-horizon-secret-key\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.502227 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc6jf\" (UniqueName: \"kubernetes.io/projected/abfb19f7-bac6-45a5-953e-546d46435171-kube-api-access-lc6jf\") pod \"horizon-767cf48f8d-lxbdx\" (UID: \"abfb19f7-bac6-45a5-953e-546d46435171\") " pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.717623 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.748863 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.886305 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d31186-90a1-4a20-a042-417e1ed712c6-operator-scripts\") pod \"e4d31186-90a1-4a20-a042-417e1ed712c6\" (UID: \"e4d31186-90a1-4a20-a042-417e1ed712c6\") " Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.886935 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmmjq\" (UniqueName: \"kubernetes.io/projected/e4d31186-90a1-4a20-a042-417e1ed712c6-kube-api-access-zmmjq\") pod \"e4d31186-90a1-4a20-a042-417e1ed712c6\" (UID: \"e4d31186-90a1-4a20-a042-417e1ed712c6\") " Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.889696 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d31186-90a1-4a20-a042-417e1ed712c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4d31186-90a1-4a20-a042-417e1ed712c6" (UID: "e4d31186-90a1-4a20-a042-417e1ed712c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.917178 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d31186-90a1-4a20-a042-417e1ed712c6-kube-api-access-zmmjq" (OuterVolumeSpecName: "kube-api-access-zmmjq") pod "e4d31186-90a1-4a20-a042-417e1ed712c6" (UID: "e4d31186-90a1-4a20-a042-417e1ed712c6"). InnerVolumeSpecName "kube-api-access-zmmjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.919172 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.991105 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986e1019-bba0-4ef2-9b54-0929a563895b-operator-scripts\") pod \"986e1019-bba0-4ef2-9b54-0929a563895b\" (UID: \"986e1019-bba0-4ef2-9b54-0929a563895b\") " Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.991174 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x4nx\" (UniqueName: \"kubernetes.io/projected/986e1019-bba0-4ef2-9b54-0929a563895b-kube-api-access-4x4nx\") pod \"986e1019-bba0-4ef2-9b54-0929a563895b\" (UID: \"986e1019-bba0-4ef2-9b54-0929a563895b\") " Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.992156 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/986e1019-bba0-4ef2-9b54-0929a563895b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "986e1019-bba0-4ef2-9b54-0929a563895b" (UID: "986e1019-bba0-4ef2-9b54-0929a563895b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.994052 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/986e1019-bba0-4ef2-9b54-0929a563895b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.994087 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmmjq\" (UniqueName: \"kubernetes.io/projected/e4d31186-90a1-4a20-a042-417e1ed712c6-kube-api-access-zmmjq\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:21 crc kubenswrapper[4817]: I0314 06:24:21.994098 4817 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d31186-90a1-4a20-a042-417e1ed712c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:21.999939 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986e1019-bba0-4ef2-9b54-0929a563895b-kube-api-access-4x4nx" (OuterVolumeSpecName: "kube-api-access-4x4nx") pod "986e1019-bba0-4ef2-9b54-0929a563895b" (UID: "986e1019-bba0-4ef2-9b54-0929a563895b"). InnerVolumeSpecName "kube-api-access-4x4nx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.091238 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-559cff965b-fnvkc"] Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.097545 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x4nx\" (UniqueName: \"kubernetes.io/projected/986e1019-bba0-4ef2-9b54-0929a563895b-kube-api-access-4x4nx\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.225089 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-9wfp5" event={"ID":"986e1019-bba0-4ef2-9b54-0929a563895b","Type":"ContainerDied","Data":"45f2a1d711ebe0ed249a583c7e1c0af0fc6e3ccc59ec0c93256e4ddd68598298"} Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.225148 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45f2a1d711ebe0ed249a583c7e1c0af0fc6e3ccc59ec0c93256e4ddd68598298" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.225374 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-9wfp5" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.235796 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerName="glance-log" containerID="cri-o://2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29" gracePeriod=30 Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.236012 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerName="glance-httpd" containerID="cri-o://1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f" gracePeriod=30 Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.237008 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68f022c7-6db3-425f-85e7-6c7c46a14b42","Type":"ContainerStarted","Data":"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f"} Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.256199 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-30bc-account-create-update-d8bsz" event={"ID":"e4d31186-90a1-4a20-a042-417e1ed712c6","Type":"ContainerDied","Data":"f73e91d6b3c24ac461fdf9dcd5abe8359ff05b5e2572abf98d39e11f06fcce7d"} Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.256247 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f73e91d6b3c24ac461fdf9dcd5abe8359ff05b5e2572abf98d39e11f06fcce7d" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.256316 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-30bc-account-create-update-d8bsz" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.259521 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05c69b85-1847-4e2a-9df8-a3759b0bfcfb","Type":"ContainerStarted","Data":"bdea71543e25283db59fb12d048494123dd002cef524b6a99d895d8fc257940f"} Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.270055 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559cff965b-fnvkc" event={"ID":"56147eb2-93e8-41a7-b6c9-4f40e07263c8","Type":"ContainerStarted","Data":"5e574b93915140f4ee944e6dd0d0f427a2f8bf7c7a2593ce4c7707602be0d224"} Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.283933 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.283778993 podStartE2EDuration="4.283778993s" podCreationTimestamp="2026-03-14 06:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:24:22.261561673 +0000 UTC m=+3116.299822439" watchObservedRunningTime="2026-03-14 06:24:22.283778993 +0000 UTC m=+3116.322039739" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.365869 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-767cf48f8d-lxbdx"] Mar 14 06:24:22 crc kubenswrapper[4817]: W0314 06:24:22.403070 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabfb19f7_bac6_45a5_953e_546d46435171.slice/crio-045ebb4329760dc8d13719a917826222a48b1519232169ebb793c35335cf7850 WatchSource:0}: Error finding container 045ebb4329760dc8d13719a917826222a48b1519232169ebb793c35335cf7850: Status 404 returned error can't find the container with id 045ebb4329760dc8d13719a917826222a48b1519232169ebb793c35335cf7850 Mar 14 06:24:22 crc 
kubenswrapper[4817]: I0314 06:24:22.767459 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:22 crc kubenswrapper[4817]: I0314 06:24:22.871599 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.010101 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.030837 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-httpd-run\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.030968 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdk6m\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-kube-api-access-sdk6m\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.031039 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-config-data\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.031126 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-internal-tls-certs\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.031249 
4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-scripts\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.031289 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-logs\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.031405 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.031456 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-ceph\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.031538 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-combined-ca-bundle\") pod \"68f022c7-6db3-425f-85e7-6c7c46a14b42\" (UID: \"68f022c7-6db3-425f-85e7-6c7c46a14b42\") " Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.031960 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.032665 4817 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.032917 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-logs" (OuterVolumeSpecName: "logs") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.037870 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.041819 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-scripts" (OuterVolumeSpecName: "scripts") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.047662 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-ceph" (OuterVolumeSpecName: "ceph") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.066320 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-kube-api-access-sdk6m" (OuterVolumeSpecName: "kube-api-access-sdk6m") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "kube-api-access-sdk6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.096109 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.111714 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.134909 4817 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.134970 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.134985 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.135000 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdk6m\" (UniqueName: \"kubernetes.io/projected/68f022c7-6db3-425f-85e7-6c7c46a14b42-kube-api-access-sdk6m\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.135015 4817 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.135026 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.135037 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f022c7-6db3-425f-85e7-6c7c46a14b42-logs\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.157437 4817 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.172197 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-config-data" (OuterVolumeSpecName: "config-data") pod "68f022c7-6db3-425f-85e7-6c7c46a14b42" (UID: "68f022c7-6db3-425f-85e7-6c7c46a14b42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.238044 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f022c7-6db3-425f-85e7-6c7c46a14b42-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.238088 4817 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.296718 4817 generic.go:334] "Generic (PLEG): container finished" podID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerID="1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f" exitCode=0
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.296754 4817 generic.go:334] "Generic (PLEG): container finished" podID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerID="2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29" exitCode=143
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.296825 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.296861 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68f022c7-6db3-425f-85e7-6c7c46a14b42","Type":"ContainerDied","Data":"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f"}
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.296912 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68f022c7-6db3-425f-85e7-6c7c46a14b42","Type":"ContainerDied","Data":"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29"}
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.296927 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"68f022c7-6db3-425f-85e7-6c7c46a14b42","Type":"ContainerDied","Data":"c633138799fa201262320ed9b885b8e313eabfbbd67b93dbe0ad3cf49611d56b"}
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.296945 4817 scope.go:117] "RemoveContainer" containerID="1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.305352 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-767cf48f8d-lxbdx" event={"ID":"abfb19f7-bac6-45a5-953e-546d46435171","Type":"ContainerStarted","Data":"045ebb4329760dc8d13719a917826222a48b1519232169ebb793c35335cf7850"}
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.308836 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05c69b85-1847-4e2a-9df8-a3759b0bfcfb","Type":"ContainerStarted","Data":"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d"}
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.341940 4817 scope.go:117] "RemoveContainer" containerID="2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.346482 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.357653 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.410031 4817 scope.go:117] "RemoveContainer" containerID="1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.411094 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 14 06:24:23 crc kubenswrapper[4817]: E0314 06:24:23.411650 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d31186-90a1-4a20-a042-417e1ed712c6" containerName="mariadb-account-create-update"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.411675 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d31186-90a1-4a20-a042-417e1ed712c6" containerName="mariadb-account-create-update"
Mar 14 06:24:23 crc kubenswrapper[4817]: E0314 06:24:23.411701 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986e1019-bba0-4ef2-9b54-0929a563895b" containerName="mariadb-database-create"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.411713 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="986e1019-bba0-4ef2-9b54-0929a563895b" containerName="mariadb-database-create"
Mar 14 06:24:23 crc kubenswrapper[4817]: E0314 06:24:23.411752 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerName="glance-log"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.411758 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerName="glance-log"
Mar 14 06:24:23 crc kubenswrapper[4817]: E0314 06:24:23.411774 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerName="glance-httpd"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.411781 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerName="glance-httpd"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.411987 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d31186-90a1-4a20-a042-417e1ed712c6" containerName="mariadb-account-create-update"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.412000 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerName="glance-httpd"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.412014 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" containerName="glance-log"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.412025 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="986e1019-bba0-4ef2-9b54-0929a563895b" containerName="mariadb-database-create"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.413555 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: E0314 06:24:23.413869 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f\": container with ID starting with 1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f not found: ID does not exist" containerID="1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.413990 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f"} err="failed to get container status \"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f\": rpc error: code = NotFound desc = could not find container \"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f\": container with ID starting with 1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f not found: ID does not exist"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.414025 4817 scope.go:117] "RemoveContainer" containerID="2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29"
Mar 14 06:24:23 crc kubenswrapper[4817]: E0314 06:24:23.414619 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29\": container with ID starting with 2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29 not found: ID does not exist" containerID="2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.414642 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29"} err="failed to get container status \"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29\": rpc error: code = NotFound desc = could not find container \"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29\": container with ID starting with 2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29 not found: ID does not exist"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.414656 4817 scope.go:117] "RemoveContainer" containerID="1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.416299 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f"} err="failed to get container status \"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f\": rpc error: code = NotFound desc = could not find container \"1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f\": container with ID starting with 1d02ada1da132fc02db8df0954dfb6e3b09e876e3f686be4672d153681db832f not found: ID does not exist"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.416315 4817 scope.go:117] "RemoveContainer" containerID="2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.417063 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.417256 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.419572 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29"} err="failed to get container status \"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29\": rpc error: code = NotFound desc = could not find container \"2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29\": container with ID starting with 2ccebfa381bfeef729525cf901030cb150548057b2f9583b8b1bc837af93ed29 not found: ID does not exist"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450200 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv4jm\" (UniqueName: \"kubernetes.io/projected/ebec1c57-27fe-4039-acbc-ddfb06dede94-kube-api-access-jv4jm\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450317 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450391 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450433 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450463 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ebec1c57-27fe-4039-acbc-ddfb06dede94-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450544 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebec1c57-27fe-4039-acbc-ddfb06dede94-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450576 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450602 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.450666 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebec1c57-27fe-4039-acbc-ddfb06dede94-logs\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.451853 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.534988 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-k845r"]
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.536264 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552109 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-k845r"]
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552212 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebec1c57-27fe-4039-acbc-ddfb06dede94-logs\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552275 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-config-data\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552306 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv4jm\" (UniqueName: \"kubernetes.io/projected/ebec1c57-27fe-4039-acbc-ddfb06dede94-kube-api-access-jv4jm\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552367 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552408 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552429 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552445 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ebec1c57-27fe-4039-acbc-ddfb06dede94-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552463 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-job-config-data\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552481 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4ljx\" (UniqueName: \"kubernetes.io/projected/2f0b273c-0bba-481a-85db-ce740bae29d2-kube-api-access-b4ljx\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552567 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-combined-ca-bundle\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552603 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebec1c57-27fe-4039-acbc-ddfb06dede94-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552622 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.552643 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.554357 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.554568 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebec1c57-27fe-4039-acbc-ddfb06dede94-logs\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.554986 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-dtvxp"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.555193 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.557421 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebec1c57-27fe-4039-acbc-ddfb06dede94-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.590236 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ebec1c57-27fe-4039-acbc-ddfb06dede94-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.590405 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.591076 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.593175 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.594768 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebec1c57-27fe-4039-acbc-ddfb06dede94-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.598257 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv4jm\" (UniqueName: \"kubernetes.io/projected/ebec1c57-27fe-4039-acbc-ddfb06dede94-kube-api-access-jv4jm\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.638621 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ebec1c57-27fe-4039-acbc-ddfb06dede94\") " pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.656113 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-job-config-data\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.656180 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4ljx\" (UniqueName: \"kubernetes.io/projected/2f0b273c-0bba-481a-85db-ce740bae29d2-kube-api-access-b4ljx\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.656382 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-combined-ca-bundle\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.656641 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-config-data\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.662430 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-config-data\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.663130 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-job-config-data\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.665012 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-combined-ca-bundle\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.673250 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4ljx\" (UniqueName: \"kubernetes.io/projected/2f0b273c-0bba-481a-85db-ce740bae29d2-kube-api-access-b4ljx\") pod \"manila-db-sync-k845r\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") " pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.752517 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:23 crc kubenswrapper[4817]: I0314 06:24:23.911647 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:24 crc kubenswrapper[4817]: I0314 06:24:24.346000 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05c69b85-1847-4e2a-9df8-a3759b0bfcfb","Type":"ContainerStarted","Data":"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27"}
Mar 14 06:24:24 crc kubenswrapper[4817]: I0314 06:24:24.346537 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerName="glance-log" containerID="cri-o://a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d" gracePeriod=30
Mar 14 06:24:24 crc kubenswrapper[4817]: I0314 06:24:24.346774 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerName="glance-httpd" containerID="cri-o://47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27" gracePeriod=30
Mar 14 06:24:24 crc kubenswrapper[4817]: I0314 06:24:24.367078 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.3670559860000004 podStartE2EDuration="4.367055986s" podCreationTimestamp="2026-03-14 06:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:24:24.366274754 +0000 UTC m=+3118.404535500" watchObservedRunningTime="2026-03-14 06:24:24.367055986 +0000 UTC m=+3118.405316732"
Mar 14 06:24:24 crc kubenswrapper[4817]: I0314 06:24:24.520388 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 14 06:24:24 crc kubenswrapper[4817]: I0314 06:24:24.655196 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-k845r"]
Mar 14 06:24:24 crc kubenswrapper[4817]: W0314 06:24:24.659189 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f0b273c_0bba_481a_85db_ce740bae29d2.slice/crio-cd14afce53734bb19341a38172e96bad1a7f82a150f60f973eae5461a7a01f48 WatchSource:0}: Error finding container cd14afce53734bb19341a38172e96bad1a7f82a150f60f973eae5461a7a01f48: Status 404 returned error can't find the container with id cd14afce53734bb19341a38172e96bad1a7f82a150f60f973eae5461a7a01f48
Mar 14 06:24:24 crc kubenswrapper[4817]: I0314 06:24:24.789649 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f022c7-6db3-425f-85e7-6c7c46a14b42" path="/var/lib/kubelet/pods/68f022c7-6db3-425f-85e7-6c7c46a14b42/volumes"
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.153397 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302385 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302504 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8hwp\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-kube-api-access-m8hwp\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302595 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-scripts\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302671 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-public-tls-certs\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302784 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-logs\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302807 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-ceph\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302824 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-httpd-run\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302845 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-combined-ca-bundle\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.302872 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-config-data\") pod \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\" (UID: \"05c69b85-1847-4e2a-9df8-a3759b0bfcfb\") "
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.308597 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.308728 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-ceph" (OuterVolumeSpecName: "ceph") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.311382 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-logs" (OuterVolumeSpecName: "logs") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.313615 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-kube-api-access-m8hwp" (OuterVolumeSpecName: "kube-api-access-m8hwp") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "kube-api-access-m8hwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.315017 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-scripts" (OuterVolumeSpecName: "scripts") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.359269 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-k845r" event={"ID":"2f0b273c-0bba-481a-85db-ce740bae29d2","Type":"ContainerStarted","Data":"cd14afce53734bb19341a38172e96bad1a7f82a150f60f973eae5461a7a01f48"}
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.361372 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebec1c57-27fe-4039-acbc-ddfb06dede94","Type":"ContainerStarted","Data":"7ccd0e1903347a94528e0e0c6aa2a9ee8420a759a5c0d63b1efd6a6d57748a54"}
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.361402 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebec1c57-27fe-4039-acbc-ddfb06dede94","Type":"ContainerStarted","Data":"045a055c19568e5398dbe8647d20afa3f9a50a3176f60650c81bc55cd29b2633"}
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.362974 4817 generic.go:334] "Generic (PLEG): container finished" podID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerID="47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27" exitCode=0
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.362996 4817 generic.go:334] "Generic (PLEG): container finished" podID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerID="a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d" exitCode=143
Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.363011 4817
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05c69b85-1847-4e2a-9df8-a3759b0bfcfb","Type":"ContainerDied","Data":"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27"} Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.363029 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05c69b85-1847-4e2a-9df8-a3759b0bfcfb","Type":"ContainerDied","Data":"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d"} Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.363040 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05c69b85-1847-4e2a-9df8-a3759b0bfcfb","Type":"ContainerDied","Data":"bdea71543e25283db59fb12d048494123dd002cef524b6a99d895d8fc257940f"} Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.363055 4817 scope.go:117] "RemoveContainer" containerID="47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.363078 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.371734 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.409384 4817 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.409689 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8hwp\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-kube-api-access-m8hwp\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.409708 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.409719 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-logs\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.409730 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-ceph\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.409741 4817 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.422130 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-config-data" (OuterVolumeSpecName: "config-data") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.432392 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.439458 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05c69b85-1847-4e2a-9df8-a3759b0bfcfb" (UID: "05c69b85-1847-4e2a-9df8-a3759b0bfcfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.467185 4817 scope.go:117] "RemoveContainer" containerID="a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.478507 4817 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.513430 4817 scope.go:117] "RemoveContainer" containerID="47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27" Mar 14 06:24:25 crc kubenswrapper[4817]: E0314 06:24:25.513837 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27\": container with ID starting with 47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27 not found: ID does not exist" containerID="47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27" Mar 
14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.513869 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27"} err="failed to get container status \"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27\": rpc error: code = NotFound desc = could not find container \"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27\": container with ID starting with 47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27 not found: ID does not exist" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.513909 4817 scope.go:117] "RemoveContainer" containerID="a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.514102 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.514137 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.514147 4817 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.514157 4817 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c69b85-1847-4e2a-9df8-a3759b0bfcfb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:25 crc kubenswrapper[4817]: E0314 06:24:25.514348 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d\": container with ID starting with a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d not found: ID does not exist" containerID="a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.514374 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d"} err="failed to get container status \"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d\": rpc error: code = NotFound desc = could not find container \"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d\": container with ID starting with a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d not found: ID does not exist" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.514389 4817 scope.go:117] "RemoveContainer" containerID="47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.515222 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27"} err="failed to get container status \"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27\": rpc error: code = NotFound desc = could not find container \"47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27\": container with ID starting with 47732fcc266c7fedfee637712b4200bc0f3d574d33da779f70fb275df71aee27 not found: ID does not exist" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.515245 4817 scope.go:117] "RemoveContainer" containerID="a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.516289 4817 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d"} err="failed to get container status \"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d\": rpc error: code = NotFound desc = could not find container \"a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d\": container with ID starting with a08a005872e737f47c2513a5cd531d9a211302eae02f41c2b26880c3a70ef83d not found: ID does not exist" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.717309 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.735253 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.744974 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:25 crc kubenswrapper[4817]: E0314 06:24:25.745413 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerName="glance-httpd" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.745428 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerName="glance-httpd" Mar 14 06:24:25 crc kubenswrapper[4817]: E0314 06:24:25.745460 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerName="glance-log" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.745467 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerName="glance-log" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.745648 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerName="glance-log" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.745669 4817 
memory_manager.go:354] "RemoveStaleState removing state" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" containerName="glance-httpd" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.746682 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.769382 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.769640 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.770961 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929151 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929224 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7b55b75-81af-4e71-8710-7b050784fa23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929262 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-scripts\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " 
pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929322 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b7b55b75-81af-4e71-8710-7b050784fa23-ceph\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929357 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9r8\" (UniqueName: \"kubernetes.io/projected/b7b55b75-81af-4e71-8710-7b050784fa23-kube-api-access-pd9r8\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929385 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929435 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-config-data\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929491 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b55b75-81af-4e71-8710-7b050784fa23-logs\") pod \"glance-default-external-api-0\" (UID: 
\"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:25 crc kubenswrapper[4817]: I0314 06:24:25.929621 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.032380 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7b55b75-81af-4e71-8710-7b050784fa23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.032460 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-scripts\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.032520 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b7b55b75-81af-4e71-8710-7b050784fa23-ceph\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.032777 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd9r8\" (UniqueName: \"kubernetes.io/projected/b7b55b75-81af-4e71-8710-7b050784fa23-kube-api-access-pd9r8\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " 
pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.032811 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.032844 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-config-data\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.032883 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b55b75-81af-4e71-8710-7b050784fa23-logs\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.033006 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.033041 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: 
I0314 06:24:26.034150 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.035123 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7b55b75-81af-4e71-8710-7b050784fa23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.035474 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7b55b75-81af-4e71-8710-7b050784fa23-logs\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.040651 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.043676 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-scripts\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.044963 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.046513 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b7b55b75-81af-4e71-8710-7b050784fa23-ceph\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.047496 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b55b75-81af-4e71-8710-7b050784fa23-config-data\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.059906 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd9r8\" (UniqueName: \"kubernetes.io/projected/b7b55b75-81af-4e71-8710-7b050784fa23-kube-api-access-pd9r8\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.078588 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"b7b55b75-81af-4e71-8710-7b050784fa23\") " pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.126037 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.377940 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ebec1c57-27fe-4039-acbc-ddfb06dede94","Type":"ContainerStarted","Data":"c53ed0f6e207d150f4918887f64ffa5110236a8855f8810b3ee21dc23888c869"} Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.405840 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.405811396 podStartE2EDuration="3.405811396s" podCreationTimestamp="2026-03-14 06:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:24:26.399822486 +0000 UTC m=+3120.438083242" watchObservedRunningTime="2026-03-14 06:24:26.405811396 +0000 UTC m=+3120.444072142" Mar 14 06:24:26 crc kubenswrapper[4817]: I0314 06:24:26.762562 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c69b85-1847-4e2a-9df8-a3759b0bfcfb" path="/var/lib/kubelet/pods/05c69b85-1847-4e2a-9df8-a3759b0bfcfb/volumes" Mar 14 06:24:28 crc kubenswrapper[4817]: I0314 06:24:28.006801 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Mar 14 06:24:28 crc kubenswrapper[4817]: I0314 06:24:28.103303 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Mar 14 06:24:33 crc kubenswrapper[4817]: I0314 06:24:33.732499 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:24:33 crc kubenswrapper[4817]: E0314 06:24:33.733744 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:24:33 crc kubenswrapper[4817]: I0314 06:24:33.753803 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:33 crc kubenswrapper[4817]: I0314 06:24:33.753878 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:33 crc kubenswrapper[4817]: I0314 06:24:33.812345 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:33 crc kubenswrapper[4817]: I0314 06:24:33.822120 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:33 crc kubenswrapper[4817]: I0314 06:24:33.940608 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 14 06:24:33 crc kubenswrapper[4817]: W0314 06:24:33.956420 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7b55b75_81af_4e71_8710_7b050784fa23.slice/crio-8f8b6322f80553184ecf72e12dfe71b8aca4cb363a492b6f4fd383d697b0a6b3 WatchSource:0}: Error finding container 8f8b6322f80553184ecf72e12dfe71b8aca4cb363a492b6f4fd383d697b0a6b3: Status 404 returned error can't find the container with id 8f8b6322f80553184ecf72e12dfe71b8aca4cb363a492b6f4fd383d697b0a6b3
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.487866 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d4f6d988c-zkbs5" event={"ID":"81ec49d8-d391-457a-ac99-01d35c496fa1","Type":"ContainerStarted","Data":"6fc0228fbf51174f2dac4264de95cfeaac92808a235c7d4d525afa481d8d521d"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.488248 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d4f6d988c-zkbs5" event={"ID":"81ec49d8-d391-457a-ac99-01d35c496fa1","Type":"ContainerStarted","Data":"a0617a5a58a94267c95d477de18a8d3b5d7aafac5ac99bd8db506575ae119f7c"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.488136 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d4f6d988c-zkbs5" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerName="horizon-log" containerID="cri-o://a0617a5a58a94267c95d477de18a8d3b5d7aafac5ac99bd8db506575ae119f7c" gracePeriod=30
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.488119 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d4f6d988c-zkbs5" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerName="horizon" containerID="cri-o://6fc0228fbf51174f2dac4264de95cfeaac92808a235c7d4d525afa481d8d521d" gracePeriod=30
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.494022 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-767cf48f8d-lxbdx" event={"ID":"abfb19f7-bac6-45a5-953e-546d46435171","Type":"ContainerStarted","Data":"83f4e09933d1c00e3aa8ada495b2722bcb405fe4064b3a2f6567e0641cd26bbf"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.494273 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-767cf48f8d-lxbdx" event={"ID":"abfb19f7-bac6-45a5-953e-546d46435171","Type":"ContainerStarted","Data":"58a38db5c1c461dc6bddbc1b86f3e7a31a885d961def728b20b528edec4a2100"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.499998 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fc7cb4589-8j695" event={"ID":"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7","Type":"ContainerStarted","Data":"c07a4ec1532ca89b8c7ba85f33d8ea433195ac4bdc848bed55ff73050ee22f20"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.500051 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fc7cb4589-8j695" event={"ID":"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7","Type":"ContainerStarted","Data":"264ff85bd6df9898a46db7830d30e910c4265b43f7e2d385d664a0c50e1fc360"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.500188 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fc7cb4589-8j695" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerName="horizon-log" containerID="cri-o://264ff85bd6df9898a46db7830d30e910c4265b43f7e2d385d664a0c50e1fc360" gracePeriod=30
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.500299 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fc7cb4589-8j695" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerName="horizon" containerID="cri-o://c07a4ec1532ca89b8c7ba85f33d8ea433195ac4bdc848bed55ff73050ee22f20" gracePeriod=30
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.511140 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559cff965b-fnvkc" event={"ID":"56147eb2-93e8-41a7-b6c9-4f40e07263c8","Type":"ContainerStarted","Data":"35b3f963a770a4c10325006a3e9273407c56234a0cf6f85f6388b9bf48b6471e"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.511201 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559cff965b-fnvkc" event={"ID":"56147eb2-93e8-41a7-b6c9-4f40e07263c8","Type":"ContainerStarted","Data":"7ddfc58cb20166fd23d72f3b715cced0d3c58321a3e5caaa45d33f6e8aacc7db"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.514631 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d4f6d988c-zkbs5" podStartSLOduration=2.631599404 podStartE2EDuration="16.514602505s" podCreationTimestamp="2026-03-14 06:24:18 +0000 UTC" firstStartedPulling="2026-03-14 06:24:19.47753256 +0000 UTC m=+3113.515793306" lastFinishedPulling="2026-03-14 06:24:33.360535661 +0000 UTC m=+3127.398796407" observedRunningTime="2026-03-14 06:24:34.509699306 +0000 UTC m=+3128.547960042" watchObservedRunningTime="2026-03-14 06:24:34.514602505 +0000 UTC m=+3128.552863261"
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.526053 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-k845r" event={"ID":"2f0b273c-0bba-481a-85db-ce740bae29d2","Type":"ContainerStarted","Data":"c53cb3f789490cb9ffbbba7a8d2ba4a954ff6ac5287f59f2771f951b4f72d16d"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.530167 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b7b55b75-81af-4e71-8710-7b050784fa23","Type":"ContainerStarted","Data":"8f8b6322f80553184ecf72e12dfe71b8aca4cb363a492b6f4fd383d697b0a6b3"}
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.530215 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.530316 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.563784 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-767cf48f8d-lxbdx" podStartSLOduration=2.608421154 podStartE2EDuration="13.563760789s" podCreationTimestamp="2026-03-14 06:24:21 +0000 UTC" firstStartedPulling="2026-03-14 06:24:22.405201836 +0000 UTC m=+3116.443462582" lastFinishedPulling="2026-03-14 06:24:33.360541481 +0000 UTC m=+3127.398802217" observedRunningTime="2026-03-14 06:24:34.555431953 +0000 UTC m=+3128.593692709" watchObservedRunningTime="2026-03-14 06:24:34.563760789 +0000 UTC m=+3128.602021535"
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.602187 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7fc7cb4589-8j695" podStartSLOduration=2.8206379249999998 podStartE2EDuration="16.602154978s" podCreationTimestamp="2026-03-14 06:24:18 +0000 UTC" firstStartedPulling="2026-03-14 06:24:19.577176276 +0000 UTC m=+3113.615437022" lastFinishedPulling="2026-03-14 06:24:33.358693329 +0000 UTC m=+3127.396954075" observedRunningTime="2026-03-14 06:24:34.596442406 +0000 UTC m=+3128.634703152" watchObservedRunningTime="2026-03-14 06:24:34.602154978 +0000 UTC m=+3128.640415724"
Mar 14 06:24:34 crc kubenswrapper[4817]: I0314 06:24:34.638870 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-k845r" podStartSLOduration=2.940362818 podStartE2EDuration="11.638836118s" podCreationTimestamp="2026-03-14 06:24:23 +0000 UTC" firstStartedPulling="2026-03-14 06:24:24.663403969 +0000 UTC m=+3118.701664715" lastFinishedPulling="2026-03-14 06:24:33.361877269 +0000 UTC m=+3127.400138015" observedRunningTime="2026-03-14 06:24:34.620707554 +0000 UTC m=+3128.658968300" watchObservedRunningTime="2026-03-14 06:24:34.638836118 +0000 UTC m=+3128.677096864"
Mar 14 06:24:35 crc kubenswrapper[4817]: I0314 06:24:35.538976 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b7b55b75-81af-4e71-8710-7b050784fa23","Type":"ContainerStarted","Data":"a11dace05b2e1d2d4afb36751a1ac255a621c7d893e4a3818c4db7e2f9fe9417"}
Mar 14 06:24:35 crc kubenswrapper[4817]: I0314 06:24:35.539509 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b7b55b75-81af-4e71-8710-7b050784fa23","Type":"ContainerStarted","Data":"e650890baf829f588dfb9b2f98e784b89cbc7bc6795a52c7a3798a351c9fc777"}
Mar 14 06:24:35 crc kubenswrapper[4817]: I0314 06:24:35.564976 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-559cff965b-fnvkc" podStartSLOduration=4.328558654 podStartE2EDuration="15.564954029s" podCreationTimestamp="2026-03-14 06:24:20 +0000 UTC" firstStartedPulling="2026-03-14 06:24:22.127338677 +0000 UTC m=+3116.165599413" lastFinishedPulling="2026-03-14 06:24:33.363734052 +0000 UTC m=+3127.401994788" observedRunningTime="2026-03-14 06:24:34.677815294 +0000 UTC m=+3128.716076060" watchObservedRunningTime="2026-03-14 06:24:35.564954029 +0000 UTC m=+3129.603214775"
Mar 14 06:24:35 crc kubenswrapper[4817]: I0314 06:24:35.572476 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.572456022 podStartE2EDuration="10.572456022s" podCreationTimestamp="2026-03-14 06:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:24:35.563459517 +0000 UTC m=+3129.601720263" watchObservedRunningTime="2026-03-14 06:24:35.572456022 +0000 UTC m=+3129.610716768"
Mar 14 06:24:36 crc kubenswrapper[4817]: I0314 06:24:36.126326 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 14 06:24:36 crc kubenswrapper[4817]: I0314 06:24:36.126397 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 14 06:24:36 crc kubenswrapper[4817]: I0314 06:24:36.162708 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 14 06:24:36 crc kubenswrapper[4817]: I0314 06:24:36.171299 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 14 06:24:36 crc kubenswrapper[4817]: I0314 06:24:36.549626 4817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 06:24:36 crc kubenswrapper[4817]: I0314 06:24:36.549661 4817 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 06:24:36 crc kubenswrapper[4817]: I0314 06:24:36.550844 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 14 06:24:36 crc kubenswrapper[4817]: I0314 06:24:36.550918 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 14 06:24:37 crc kubenswrapper[4817]: I0314 06:24:37.258580 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:37 crc kubenswrapper[4817]: I0314 06:24:37.259428 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 14 06:24:38 crc kubenswrapper[4817]: I0314 06:24:38.586810 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7fc7cb4589-8j695"
Mar 14 06:24:38 crc kubenswrapper[4817]: I0314 06:24:38.727060 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d4f6d988c-zkbs5"
Mar 14 06:24:41 crc kubenswrapper[4817]: I0314 06:24:41.402824 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-559cff965b-fnvkc"
Mar 14 06:24:41 crc kubenswrapper[4817]: I0314 06:24:41.404109 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-559cff965b-fnvkc"
Mar 14 06:24:41 crc kubenswrapper[4817]: I0314 06:24:41.721665 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-767cf48f8d-lxbdx"
Mar 14 06:24:41 crc kubenswrapper[4817]: I0314 06:24:41.721765 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-767cf48f8d-lxbdx"
Mar 14 06:24:45 crc kubenswrapper[4817]: I0314 06:24:45.674277 4817 scope.go:117] "RemoveContainer" containerID="42b056da6dd99e06e19ebb353029d0ff54b02b1f982f84c358b0a02ddfaba5ba"
Mar 14 06:24:46 crc kubenswrapper[4817]: I0314 06:24:46.741293 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537"
Mar 14 06:24:46 crc kubenswrapper[4817]: E0314 06:24:46.741832 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:24:50 crc kubenswrapper[4817]: I0314 06:24:50.738088 4817 generic.go:334] "Generic (PLEG): container finished" podID="2f0b273c-0bba-481a-85db-ce740bae29d2" containerID="c53cb3f789490cb9ffbbba7a8d2ba4a954ff6ac5287f59f2771f951b4f72d16d" exitCode=0
Mar 14 06:24:50 crc kubenswrapper[4817]: I0314 06:24:50.748417 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-k845r" event={"ID":"2f0b273c-0bba-481a-85db-ce740bae29d2","Type":"ContainerDied","Data":"c53cb3f789490cb9ffbbba7a8d2ba4a954ff6ac5287f59f2771f951b4f72d16d"}
Mar 14 06:24:51 crc kubenswrapper[4817]: I0314 06:24:51.405582 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-559cff965b-fnvkc" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.7:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.7:8443: connect: connection refused"
Mar 14 06:24:51 crc kubenswrapper[4817]: I0314 06:24:51.721096 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-767cf48f8d-lxbdx" podUID="abfb19f7-bac6-45a5-953e-546d46435171" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.8:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.8:8443: connect: connection refused"
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.213003 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.368433 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4ljx\" (UniqueName: \"kubernetes.io/projected/2f0b273c-0bba-481a-85db-ce740bae29d2-kube-api-access-b4ljx\") pod \"2f0b273c-0bba-481a-85db-ce740bae29d2\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") "
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.368517 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-config-data\") pod \"2f0b273c-0bba-481a-85db-ce740bae29d2\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") "
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.368622 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-combined-ca-bundle\") pod \"2f0b273c-0bba-481a-85db-ce740bae29d2\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") "
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.369338 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-job-config-data\") pod \"2f0b273c-0bba-481a-85db-ce740bae29d2\" (UID: \"2f0b273c-0bba-481a-85db-ce740bae29d2\") "
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.375712 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0b273c-0bba-481a-85db-ce740bae29d2-kube-api-access-b4ljx" (OuterVolumeSpecName: "kube-api-access-b4ljx") pod "2f0b273c-0bba-481a-85db-ce740bae29d2" (UID: "2f0b273c-0bba-481a-85db-ce740bae29d2"). InnerVolumeSpecName "kube-api-access-b4ljx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.376509 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "2f0b273c-0bba-481a-85db-ce740bae29d2" (UID: "2f0b273c-0bba-481a-85db-ce740bae29d2"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.379250 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-config-data" (OuterVolumeSpecName: "config-data") pod "2f0b273c-0bba-481a-85db-ce740bae29d2" (UID: "2f0b273c-0bba-481a-85db-ce740bae29d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.403249 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f0b273c-0bba-481a-85db-ce740bae29d2" (UID: "2f0b273c-0bba-481a-85db-ce740bae29d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.473041 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4ljx\" (UniqueName: \"kubernetes.io/projected/2f0b273c-0bba-481a-85db-ce740bae29d2-kube-api-access-b4ljx\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.473539 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.473599 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.473612 4817 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f0b273c-0bba-481a-85db-ce740bae29d2-job-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.769503 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-k845r" event={"ID":"2f0b273c-0bba-481a-85db-ce740bae29d2","Type":"ContainerDied","Data":"cd14afce53734bb19341a38172e96bad1a7f82a150f60f973eae5461a7a01f48"}
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.769560 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd14afce53734bb19341a38172e96bad1a7f82a150f60f973eae5461a7a01f48"
Mar 14 06:24:52 crc kubenswrapper[4817]: I0314 06:24:52.769671 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-k845r"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.146143 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Mar 14 06:24:53 crc kubenswrapper[4817]: E0314 06:24:53.146607 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0b273c-0bba-481a-85db-ce740bae29d2" containerName="manila-db-sync"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.146627 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0b273c-0bba-481a-85db-ce740bae29d2" containerName="manila-db-sync"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.146837 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0b273c-0bba-481a-85db-ce740bae29d2" containerName="manila-db-sync"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.147977 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.153134 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.153925 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.168646 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.168802 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-dtvxp"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.171469 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.287712 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.289470 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.292623 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.292699 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.292809 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.292842 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctznb\" (UniqueName: \"kubernetes.io/projected/a44876b2-3994-4528-90b5-98f450b7592e-kube-api-access-ctznb\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.292954 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-scripts\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.293062 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a44876b2-3994-4528-90b5-98f450b7592e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.295977 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.306250 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.375017 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-rc9hl"]
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.377209 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395122 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a44876b2-3994-4528-90b5-98f450b7592e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395218 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395284 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395319 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395344 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395385 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395410 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5ftr\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-kube-api-access-j5ftr\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395444 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395473 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-scripts\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395495 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-ceph\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395541 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395578 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctznb\" (UniqueName: \"kubernetes.io/projected/a44876b2-3994-4528-90b5-98f450b7592e-kube-api-access-ctznb\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395643 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-scripts\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395673 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.395802 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a44876b2-3994-4528-90b5-98f450b7592e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.409973 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.419837 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.423129 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-scripts\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.423205 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-rc9hl"]
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.435405 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctznb\" (UniqueName: \"kubernetes.io/projected/a44876b2-3994-4528-90b5-98f450b7592e-kube-api-access-ctznb\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.436477 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.464526 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.497978 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498082 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498122 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498168 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498200 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498227 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5ftr\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-kube-api-access-j5ftr\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498274 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-scripts\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498301 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-ceph\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498347 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-config\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498408 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xffwl\" (UniqueName: \"kubernetes.io/projected/5405820a-1727-4506-aeef-6c081ec11d88-kube-api-access-xffwl\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498461 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498502 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498529 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498555 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.498714 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0"
Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.499602 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.505429 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.509910 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.516594 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.517556 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-scripts\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.536642 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-ceph\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " 
pod="openstack/manila-share-share1-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.536646 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5ftr\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-kube-api-access-j5ftr\") pod \"manila-share-share1-0\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") " pod="openstack/manila-share-share1-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.605487 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.605561 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.605593 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.605753 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc 
kubenswrapper[4817]: I0314 06:24:53.605824 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-config\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.605881 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xffwl\" (UniqueName: \"kubernetes.io/projected/5405820a-1727-4506-aeef-6c081ec11d88-kube-api-access-xffwl\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.607784 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.607912 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-config\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.607953 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.608484 4817 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.611367 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5405820a-1727-4506-aeef-6c081ec11d88-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.623426 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.642175 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xffwl\" (UniqueName: \"kubernetes.io/projected/5405820a-1727-4506-aeef-6c081ec11d88-kube-api-access-xffwl\") pod \"dnsmasq-dns-69655fd4bf-rc9hl\" (UID: \"5405820a-1727-4506-aeef-6c081ec11d88\") " pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.644806 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.713916 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.716341 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.722411 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.739962 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.815415 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.815683 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-scripts\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.815807 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvl8\" (UniqueName: \"kubernetes.io/projected/735e0072-d643-4e67-bd2b-03bbd5683711-kube-api-access-pgvl8\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.815972 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/735e0072-d643-4e67-bd2b-03bbd5683711-logs\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.816008 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data-custom\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.816045 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.816072 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/735e0072-d643-4e67-bd2b-03bbd5683711-etc-machine-id\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.918712 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-scripts\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.918770 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvl8\" (UniqueName: \"kubernetes.io/projected/735e0072-d643-4e67-bd2b-03bbd5683711-kube-api-access-pgvl8\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.918809 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/735e0072-d643-4e67-bd2b-03bbd5683711-logs\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " 
pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.918832 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data-custom\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.918852 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/735e0072-d643-4e67-bd2b-03bbd5683711-etc-machine-id\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.918867 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.918978 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.919312 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/735e0072-d643-4e67-bd2b-03bbd5683711-logs\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.920197 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/735e0072-d643-4e67-bd2b-03bbd5683711-etc-machine-id\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.928731 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-scripts\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.931252 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.935454 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.936160 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data-custom\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:53 crc kubenswrapper[4817]: I0314 06:24:53.939453 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvl8\" (UniqueName: \"kubernetes.io/projected/735e0072-d643-4e67-bd2b-03bbd5683711-kube-api-access-pgvl8\") pod \"manila-api-0\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " pod="openstack/manila-api-0" Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.044486 4817 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.137185 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.378557 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-rc9hl"] Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.502313 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.812550 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a44876b2-3994-4528-90b5-98f450b7592e","Type":"ContainerStarted","Data":"8609da3544159220e42fb6353e8aa1756166d108c35782636eaf17924d360918"} Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.821313 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" event={"ID":"5405820a-1727-4506-aeef-6c081ec11d88","Type":"ContainerStarted","Data":"b00b94da305ca25bfee4efc3e1cfda6915378a59763671291f8819a00e7af596"} Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.821372 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" event={"ID":"5405820a-1727-4506-aeef-6c081ec11d88","Type":"ContainerStarted","Data":"7a56d3a5e5492402a2e3c8b2e59a2e88bd15b30313e7a62fe98e2de5721632af"} Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.824883 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2adc02b-dfa9-4679-9c32-e127baabf2ad","Type":"ContainerStarted","Data":"8f063a7725dc856062469ad225029ce0f5f1d860f22c47eef00d38311e37ab6c"} Mar 14 06:24:54 crc kubenswrapper[4817]: I0314 06:24:54.881032 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 14 06:24:54 crc 
kubenswrapper[4817]: W0314 06:24:54.900577 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod735e0072_d643_4e67_bd2b_03bbd5683711.slice/crio-aae0d585b329da7bc0c42efc5a5cbcf8ca5831721ca5610246711a43b907d853 WatchSource:0}: Error finding container aae0d585b329da7bc0c42efc5a5cbcf8ca5831721ca5610246711a43b907d853: Status 404 returned error can't find the container with id aae0d585b329da7bc0c42efc5a5cbcf8ca5831721ca5610246711a43b907d853 Mar 14 06:24:55 crc kubenswrapper[4817]: I0314 06:24:55.883450 4817 generic.go:334] "Generic (PLEG): container finished" podID="5405820a-1727-4506-aeef-6c081ec11d88" containerID="b00b94da305ca25bfee4efc3e1cfda6915378a59763671291f8819a00e7af596" exitCode=0 Mar 14 06:24:55 crc kubenswrapper[4817]: I0314 06:24:55.883934 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" event={"ID":"5405820a-1727-4506-aeef-6c081ec11d88","Type":"ContainerDied","Data":"b00b94da305ca25bfee4efc3e1cfda6915378a59763671291f8819a00e7af596"} Mar 14 06:24:55 crc kubenswrapper[4817]: I0314 06:24:55.883966 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" event={"ID":"5405820a-1727-4506-aeef-6c081ec11d88","Type":"ContainerStarted","Data":"3648872ba47a41bd78a6cf4e70372feb253cf8eacbd50f468d098bae5df1bfa7"} Mar 14 06:24:55 crc kubenswrapper[4817]: I0314 06:24:55.887287 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:24:55 crc kubenswrapper[4817]: I0314 06:24:55.903306 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"735e0072-d643-4e67-bd2b-03bbd5683711","Type":"ContainerStarted","Data":"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace"} Mar 14 06:24:55 crc kubenswrapper[4817]: I0314 06:24:55.903367 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-api-0" event={"ID":"735e0072-d643-4e67-bd2b-03bbd5683711","Type":"ContainerStarted","Data":"aae0d585b329da7bc0c42efc5a5cbcf8ca5831721ca5610246711a43b907d853"} Mar 14 06:24:55 crc kubenswrapper[4817]: I0314 06:24:55.915049 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" podStartSLOduration=2.915020647 podStartE2EDuration="2.915020647s" podCreationTimestamp="2026-03-14 06:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:24:55.907188145 +0000 UTC m=+3149.945448891" watchObservedRunningTime="2026-03-14 06:24:55.915020647 +0000 UTC m=+3149.953281393" Mar 14 06:24:55 crc kubenswrapper[4817]: I0314 06:24:55.920710 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a44876b2-3994-4528-90b5-98f450b7592e","Type":"ContainerStarted","Data":"903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc"} Mar 14 06:24:56 crc kubenswrapper[4817]: I0314 06:24:56.709110 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Mar 14 06:24:56 crc kubenswrapper[4817]: I0314 06:24:56.956917 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"735e0072-d643-4e67-bd2b-03bbd5683711","Type":"ContainerStarted","Data":"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c"} Mar 14 06:24:56 crc kubenswrapper[4817]: I0314 06:24:56.957350 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Mar 14 06:24:56 crc kubenswrapper[4817]: I0314 06:24:56.975126 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a44876b2-3994-4528-90b5-98f450b7592e","Type":"ContainerStarted","Data":"f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065"} Mar 14 06:24:56 crc 
kubenswrapper[4817]: I0314 06:24:56.996750 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.99672052 podStartE2EDuration="3.99672052s" podCreationTimestamp="2026-03-14 06:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:24:56.982711773 +0000 UTC m=+3151.020972519" watchObservedRunningTime="2026-03-14 06:24:56.99672052 +0000 UTC m=+3151.034981266" Mar 14 06:24:57 crc kubenswrapper[4817]: I0314 06:24:57.013675 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.295449695 podStartE2EDuration="4.0136531s" podCreationTimestamp="2026-03-14 06:24:53 +0000 UTC" firstStartedPulling="2026-03-14 06:24:54.192087082 +0000 UTC m=+3148.230347838" lastFinishedPulling="2026-03-14 06:24:54.910290487 +0000 UTC m=+3148.948551243" observedRunningTime="2026-03-14 06:24:57.007837255 +0000 UTC m=+3151.046098021" watchObservedRunningTime="2026-03-14 06:24:57.0136531 +0000 UTC m=+3151.051913846" Mar 14 06:24:57 crc kubenswrapper[4817]: I0314 06:24:57.984258 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="735e0072-d643-4e67-bd2b-03bbd5683711" containerName="manila-api-log" containerID="cri-o://c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace" gracePeriod=30 Mar 14 06:24:57 crc kubenswrapper[4817]: I0314 06:24:57.984340 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="735e0072-d643-4e67-bd2b-03bbd5683711" containerName="manila-api" containerID="cri-o://0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c" gracePeriod=30 Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.723089 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.881685 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-scripts\") pod \"735e0072-d643-4e67-bd2b-03bbd5683711\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.881771 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-combined-ca-bundle\") pod \"735e0072-d643-4e67-bd2b-03bbd5683711\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.881821 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data-custom\") pod \"735e0072-d643-4e67-bd2b-03bbd5683711\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.881955 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgvl8\" (UniqueName: \"kubernetes.io/projected/735e0072-d643-4e67-bd2b-03bbd5683711-kube-api-access-pgvl8\") pod \"735e0072-d643-4e67-bd2b-03bbd5683711\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.881995 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data\") pod \"735e0072-d643-4e67-bd2b-03bbd5683711\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.882174 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/735e0072-d643-4e67-bd2b-03bbd5683711-logs\") pod \"735e0072-d643-4e67-bd2b-03bbd5683711\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.882202 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/735e0072-d643-4e67-bd2b-03bbd5683711-etc-machine-id\") pod \"735e0072-d643-4e67-bd2b-03bbd5683711\" (UID: \"735e0072-d643-4e67-bd2b-03bbd5683711\") " Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.883174 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735e0072-d643-4e67-bd2b-03bbd5683711-logs" (OuterVolumeSpecName: "logs") pod "735e0072-d643-4e67-bd2b-03bbd5683711" (UID: "735e0072-d643-4e67-bd2b-03bbd5683711"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.883606 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/735e0072-d643-4e67-bd2b-03bbd5683711-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "735e0072-d643-4e67-bd2b-03bbd5683711" (UID: "735e0072-d643-4e67-bd2b-03bbd5683711"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.894346 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-scripts" (OuterVolumeSpecName: "scripts") pod "735e0072-d643-4e67-bd2b-03bbd5683711" (UID: "735e0072-d643-4e67-bd2b-03bbd5683711"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.894559 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735e0072-d643-4e67-bd2b-03bbd5683711-kube-api-access-pgvl8" (OuterVolumeSpecName: "kube-api-access-pgvl8") pod "735e0072-d643-4e67-bd2b-03bbd5683711" (UID: "735e0072-d643-4e67-bd2b-03bbd5683711"). InnerVolumeSpecName "kube-api-access-pgvl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.894647 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "735e0072-d643-4e67-bd2b-03bbd5683711" (UID: "735e0072-d643-4e67-bd2b-03bbd5683711"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.995246 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "735e0072-d643-4e67-bd2b-03bbd5683711" (UID: "735e0072-d643-4e67-bd2b-03bbd5683711"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.995331 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgvl8\" (UniqueName: \"kubernetes.io/projected/735e0072-d643-4e67-bd2b-03bbd5683711-kube-api-access-pgvl8\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.995347 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/735e0072-d643-4e67-bd2b-03bbd5683711-logs\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.995357 4817 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/735e0072-d643-4e67-bd2b-03bbd5683711-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.995365 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:58 crc kubenswrapper[4817]: I0314 06:24:58.995375 4817 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.011106 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data" (OuterVolumeSpecName: "config-data") pod "735e0072-d643-4e67-bd2b-03bbd5683711" (UID: "735e0072-d643-4e67-bd2b-03bbd5683711"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.024031 4817 generic.go:334] "Generic (PLEG): container finished" podID="735e0072-d643-4e67-bd2b-03bbd5683711" containerID="0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c" exitCode=0 Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.024069 4817 generic.go:334] "Generic (PLEG): container finished" podID="735e0072-d643-4e67-bd2b-03bbd5683711" containerID="c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace" exitCode=143 Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.024092 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"735e0072-d643-4e67-bd2b-03bbd5683711","Type":"ContainerDied","Data":"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c"} Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.024132 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"735e0072-d643-4e67-bd2b-03bbd5683711","Type":"ContainerDied","Data":"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace"} Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.024142 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"735e0072-d643-4e67-bd2b-03bbd5683711","Type":"ContainerDied","Data":"aae0d585b329da7bc0c42efc5a5cbcf8ca5831721ca5610246711a43b907d853"} Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.024162 4817 scope.go:117] "RemoveContainer" containerID="0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.024326 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.070354 4817 scope.go:117] "RemoveContainer" containerID="c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.086449 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.112034 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.112083 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735e0072-d643-4e67-bd2b-03bbd5683711-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.112119 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.126270 4817 scope.go:117] "RemoveContainer" containerID="0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c" Mar 14 06:24:59 crc kubenswrapper[4817]: E0314 06:24:59.132627 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c\": container with ID starting with 0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c not found: ID does not exist" containerID="0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.132692 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c"} err="failed to get container status 
\"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c\": rpc error: code = NotFound desc = could not find container \"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c\": container with ID starting with 0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c not found: ID does not exist" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.132740 4817 scope.go:117] "RemoveContainer" containerID="c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace" Mar 14 06:24:59 crc kubenswrapper[4817]: E0314 06:24:59.133913 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace\": container with ID starting with c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace not found: ID does not exist" containerID="c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.133935 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace"} err="failed to get container status \"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace\": rpc error: code = NotFound desc = could not find container \"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace\": container with ID starting with c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace not found: ID does not exist" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.133948 4817 scope.go:117] "RemoveContainer" containerID="0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.134472 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c"} err="failed to get 
container status \"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c\": rpc error: code = NotFound desc = could not find container \"0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c\": container with ID starting with 0a83da29f2be040a7c1f9c1768c5ed0a60fd4476fb1fd89e70f9b0e9f051675c not found: ID does not exist" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.134492 4817 scope.go:117] "RemoveContainer" containerID="c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.134882 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace"} err="failed to get container status \"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace\": rpc error: code = NotFound desc = could not find container \"c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace\": container with ID starting with c5500c5631b3d6564c24d16a1e65f2963485732698721031dec4a40e31658ace not found: ID does not exist" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.138104 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Mar 14 06:24:59 crc kubenswrapper[4817]: E0314 06:24:59.138867 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e0072-d643-4e67-bd2b-03bbd5683711" containerName="manila-api-log" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.138912 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="735e0072-d643-4e67-bd2b-03bbd5683711" containerName="manila-api-log" Mar 14 06:24:59 crc kubenswrapper[4817]: E0314 06:24:59.138995 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e0072-d643-4e67-bd2b-03bbd5683711" containerName="manila-api" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.139325 4817 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="735e0072-d643-4e67-bd2b-03bbd5683711" containerName="manila-api" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.139691 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="735e0072-d643-4e67-bd2b-03bbd5683711" containerName="manila-api-log" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.139722 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="735e0072-d643-4e67-bd2b-03bbd5683711" containerName="manila-api" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.141703 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.144269 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.144932 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.150560 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.152148 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.183163 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.187401 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.317735 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-internal-tls-certs\") pod \"manila-api-0\" (UID: 
\"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.317788 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxmf\" (UniqueName: \"kubernetes.io/projected/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-kube-api-access-mxxmf\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.317906 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-scripts\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.317986 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-config-data-custom\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.318010 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.318049 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-config-data\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 
06:24:59.318070 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-logs\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.318100 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-public-tls-certs\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.318139 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-etc-machine-id\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.422425 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-config-data-custom\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.422481 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.422508 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-config-data\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.422532 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-logs\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.422554 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-public-tls-certs\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.422583 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-etc-machine-id\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.422633 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.422655 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxmf\" (UniqueName: \"kubernetes.io/projected/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-kube-api-access-mxxmf\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc 
kubenswrapper[4817]: I0314 06:24:59.422719 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-scripts\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.423238 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-etc-machine-id\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.423422 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-logs\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.428746 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-scripts\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.434955 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-config-data-custom\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.435413 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-config-data\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " 
pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.436233 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.443831 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-internal-tls-certs\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.453456 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-public-tls-certs\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.453710 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxmf\" (UniqueName: \"kubernetes.io/projected/f23f27ea-4a4a-44ca-9b1c-457e8e4e397a-kube-api-access-mxxmf\") pod \"manila-api-0\" (UID: \"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a\") " pod="openstack/manila-api-0" Mar 14 06:24:59 crc kubenswrapper[4817]: I0314 06:24:59.499191 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Mar 14 06:25:00 crc kubenswrapper[4817]: I0314 06:25:00.126355 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 14 06:25:00 crc kubenswrapper[4817]: I0314 06:25:00.745566 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735e0072-d643-4e67-bd2b-03bbd5683711" path="/var/lib/kubelet/pods/735e0072-d643-4e67-bd2b-03bbd5683711/volumes" Mar 14 06:25:01 crc kubenswrapper[4817]: I0314 06:25:01.066145 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a","Type":"ContainerStarted","Data":"7381d7b61d8921a0dc2a6384dc96ae66d21f3d637ae7be3f1cfff5be5541e9bf"} Mar 14 06:25:01 crc kubenswrapper[4817]: I0314 06:25:01.066496 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a","Type":"ContainerStarted","Data":"0c0289358d8e8d9a408ad09575706ad39cf90c4265f008f278c0b3b77a95a6ac"} Mar 14 06:25:01 crc kubenswrapper[4817]: I0314 06:25:01.134640 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:01 crc kubenswrapper[4817]: I0314 06:25:01.135079 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="ceilometer-central-agent" containerID="cri-o://9998e0aaa0f4549d9f30f501352b9a600999db9deb43bc866d4789909e5ef60c" gracePeriod=30 Mar 14 06:25:01 crc kubenswrapper[4817]: I0314 06:25:01.135170 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="proxy-httpd" containerID="cri-o://203c4234bf50ffef3cbcd5c20d318313984a98c3213c7960188cd537798bba74" gracePeriod=30 Mar 14 06:25:01 crc kubenswrapper[4817]: I0314 06:25:01.135170 4817 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="ceilometer-notification-agent" containerID="cri-o://4c588abde7ecaea57d824b62ede42d3b5433ccc584206a55a3b15187c891a59a" gracePeriod=30 Mar 14 06:25:01 crc kubenswrapper[4817]: I0314 06:25:01.135174 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="sg-core" containerID="cri-o://15fab14509bd8b37d8649d0d4e30717dbe50bc917695e764a7e241d22cb0cff3" gracePeriod=30 Mar 14 06:25:01 crc kubenswrapper[4817]: I0314 06:25:01.732848 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:25:01 crc kubenswrapper[4817]: E0314 06:25:01.733425 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.114753 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"f23f27ea-4a4a-44ca-9b1c-457e8e4e397a","Type":"ContainerStarted","Data":"10984d14f2512c83e0897d315d35ee4823ecb34c923f1c66e1e874c9f920fd7b"} Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.115367 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.120837 4817 generic.go:334] "Generic (PLEG): container finished" podID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerID="203c4234bf50ffef3cbcd5c20d318313984a98c3213c7960188cd537798bba74" exitCode=0 Mar 14 
06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.120882 4817 generic.go:334] "Generic (PLEG): container finished" podID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerID="15fab14509bd8b37d8649d0d4e30717dbe50bc917695e764a7e241d22cb0cff3" exitCode=2 Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.120928 4817 generic.go:334] "Generic (PLEG): container finished" podID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerID="4c588abde7ecaea57d824b62ede42d3b5433ccc584206a55a3b15187c891a59a" exitCode=0 Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.120940 4817 generic.go:334] "Generic (PLEG): container finished" podID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerID="9998e0aaa0f4549d9f30f501352b9a600999db9deb43bc866d4789909e5ef60c" exitCode=0 Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.120968 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerDied","Data":"203c4234bf50ffef3cbcd5c20d318313984a98c3213c7960188cd537798bba74"} Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.121001 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerDied","Data":"15fab14509bd8b37d8649d0d4e30717dbe50bc917695e764a7e241d22cb0cff3"} Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.121015 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerDied","Data":"4c588abde7ecaea57d824b62ede42d3b5433ccc584206a55a3b15187c891a59a"} Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 06:25:02.121026 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerDied","Data":"9998e0aaa0f4549d9f30f501352b9a600999db9deb43bc866d4789909e5ef60c"} Mar 14 06:25:02 crc kubenswrapper[4817]: I0314 
06:25:02.148392 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.148367216 podStartE2EDuration="3.148367216s" podCreationTimestamp="2026-03-14 06:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:25:02.136023286 +0000 UTC m=+3156.174284062" watchObservedRunningTime="2026-03-14 06:25:02.148367216 +0000 UTC m=+3156.186627972" Mar 14 06:25:03 crc kubenswrapper[4817]: I0314 06:25:03.465857 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Mar 14 06:25:03 crc kubenswrapper[4817]: I0314 06:25:03.647066 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-rc9hl" Mar 14 06:25:03 crc kubenswrapper[4817]: I0314 06:25:03.715797 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-7p784"] Mar 14 06:25:03 crc kubenswrapper[4817]: I0314 06:25:03.716168 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" podUID="ca487333-68c9-470e-b299-c8331d9b59b6" containerName="dnsmasq-dns" containerID="cri-o://3965b2739a25882622ae903fd3aa0e48df1d74dad558117e1161ed1c5de16668" gracePeriod=10 Mar 14 06:25:04 crc kubenswrapper[4817]: I0314 06:25:04.146141 4817 generic.go:334] "Generic (PLEG): container finished" podID="ca487333-68c9-470e-b299-c8331d9b59b6" containerID="3965b2739a25882622ae903fd3aa0e48df1d74dad558117e1161ed1c5de16668" exitCode=0 Mar 14 06:25:04 crc kubenswrapper[4817]: I0314 06:25:04.146189 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" event={"ID":"ca487333-68c9-470e-b299-c8331d9b59b6","Type":"ContainerDied","Data":"3965b2739a25882622ae903fd3aa0e48df1d74dad558117e1161ed1c5de16668"} Mar 14 06:25:04 crc kubenswrapper[4817]: I0314 
06:25:04.827287 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-559cff965b-fnvkc"
Mar 14 06:25:04 crc kubenswrapper[4817]: I0314 06:25:04.849946 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-767cf48f8d-lxbdx"
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.131869 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784"
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.174626 4817 generic.go:334] "Generic (PLEG): container finished" podID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerID="6fc0228fbf51174f2dac4264de95cfeaac92808a235c7d4d525afa481d8d521d" exitCode=137
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.174661 4817 generic.go:334] "Generic (PLEG): container finished" podID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerID="a0617a5a58a94267c95d477de18a8d3b5d7aafac5ac99bd8db506575ae119f7c" exitCode=137
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.174711 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d4f6d988c-zkbs5" event={"ID":"81ec49d8-d391-457a-ac99-01d35c496fa1","Type":"ContainerDied","Data":"6fc0228fbf51174f2dac4264de95cfeaac92808a235c7d4d525afa481d8d521d"}
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.174743 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d4f6d988c-zkbs5" event={"ID":"81ec49d8-d391-457a-ac99-01d35c496fa1","Type":"ContainerDied","Data":"a0617a5a58a94267c95d477de18a8d3b5d7aafac5ac99bd8db506575ae119f7c"}
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.180988 4817 generic.go:334] "Generic (PLEG): container finished" podID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerID="c07a4ec1532ca89b8c7ba85f33d8ea433195ac4bdc848bed55ff73050ee22f20" exitCode=137
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.181037 4817 generic.go:334] "Generic (PLEG): container finished" podID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerID="264ff85bd6df9898a46db7830d30e910c4265b43f7e2d385d664a0c50e1fc360" exitCode=137
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.181057 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fc7cb4589-8j695" event={"ID":"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7","Type":"ContainerDied","Data":"c07a4ec1532ca89b8c7ba85f33d8ea433195ac4bdc848bed55ff73050ee22f20"}
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.181097 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fc7cb4589-8j695" event={"ID":"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7","Type":"ContainerDied","Data":"264ff85bd6df9898a46db7830d30e910c4265b43f7e2d385d664a0c50e1fc360"}
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.183229 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784" event={"ID":"ca487333-68c9-470e-b299-c8331d9b59b6","Type":"ContainerDied","Data":"b4a7eee2919098274e07f69a96406263905fc0867bfef3f230f88741b88a60fa"}
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.183275 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-7p784"
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.183291 4817 scope.go:117] "RemoveContainer" containerID="3965b2739a25882622ae903fd3aa0e48df1d74dad558117e1161ed1c5de16668"
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.219081 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-sb\") pod \"ca487333-68c9-470e-b299-c8331d9b59b6\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.219336 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-dns-svc\") pod \"ca487333-68c9-470e-b299-c8331d9b59b6\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.219418 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-openstack-edpm-ipam\") pod \"ca487333-68c9-470e-b299-c8331d9b59b6\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.219541 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-nb\") pod \"ca487333-68c9-470e-b299-c8331d9b59b6\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.219621 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-config\") pod \"ca487333-68c9-470e-b299-c8331d9b59b6\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.219655 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcpxv\" (UniqueName: \"kubernetes.io/projected/ca487333-68c9-470e-b299-c8331d9b59b6-kube-api-access-mcpxv\") pod \"ca487333-68c9-470e-b299-c8331d9b59b6\" (UID: \"ca487333-68c9-470e-b299-c8331d9b59b6\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.230155 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca487333-68c9-470e-b299-c8331d9b59b6-kube-api-access-mcpxv" (OuterVolumeSpecName: "kube-api-access-mcpxv") pod "ca487333-68c9-470e-b299-c8331d9b59b6" (UID: "ca487333-68c9-470e-b299-c8331d9b59b6"). InnerVolumeSpecName "kube-api-access-mcpxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.276343 4817 scope.go:117] "RemoveContainer" containerID="a6a373c7926c77256e248e3649be47eeaec888e378721e3693a632720e3ac6dd"
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.336315 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcpxv\" (UniqueName: \"kubernetes.io/projected/ca487333-68c9-470e-b299-c8331d9b59b6-kube-api-access-mcpxv\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.370622 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fc7cb4589-8j695"
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.392961 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-config" (OuterVolumeSpecName: "config") pod "ca487333-68c9-470e-b299-c8331d9b59b6" (UID: "ca487333-68c9-470e-b299-c8331d9b59b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.404173 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca487333-68c9-470e-b299-c8331d9b59b6" (UID: "ca487333-68c9-470e-b299-c8331d9b59b6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.428648 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ca487333-68c9-470e-b299-c8331d9b59b6" (UID: "ca487333-68c9-470e-b299-c8331d9b59b6"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.436749 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.439181 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-logs\") pod \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.439331 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-scripts\") pod \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.439940 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-horizon-secret-key\") pod \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.440020 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-config-data\") pod \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.440109 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65lhq\" (UniqueName: \"kubernetes.io/projected/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-kube-api-access-65lhq\") pod \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\" (UID: \"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.440956 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.440972 4817 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-config\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.440983 4817 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.442197 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca487333-68c9-470e-b299-c8331d9b59b6" (UID: "ca487333-68c9-470e-b299-c8331d9b59b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.443634 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-logs" (OuterVolumeSpecName: "logs") pod "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" (UID: "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.444573 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-kube-api-access-65lhq" (OuterVolumeSpecName: "kube-api-access-65lhq") pod "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" (UID: "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7"). InnerVolumeSpecName "kube-api-access-65lhq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.445035 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d4f6d988c-zkbs5"
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.445873 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" (UID: "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.454479 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca487333-68c9-470e-b299-c8331d9b59b6" (UID: "ca487333-68c9-470e-b299-c8331d9b59b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.493049 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-scripts" (OuterVolumeSpecName: "scripts") pod "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" (UID: "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.528558 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-config-data" (OuterVolumeSpecName: "config-data") pod "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" (UID: "e9d2f538-a74e-441c-9406-fe1d3ef5b8c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543175 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-config-data\") pod \"81ec49d8-d391-457a-ac99-01d35c496fa1\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543236 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-scripts\") pod \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543341 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ec49d8-d391-457a-ac99-01d35c496fa1-logs\") pod \"81ec49d8-d391-457a-ac99-01d35c496fa1\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543371 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bltf\" (UniqueName: \"kubernetes.io/projected/35491618-81a4-4f75-927f-6b6a3d0c9ce2-kube-api-access-8bltf\") pod \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543399 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-ceilometer-tls-certs\") pod \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543434 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-scripts\") pod \"81ec49d8-d391-457a-ac99-01d35c496fa1\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543472 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-sg-core-conf-yaml\") pod \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543492 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-config-data\") pod \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543545 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-combined-ca-bundle\") pod \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543583 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-run-httpd\") pod \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543628 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81ec49d8-d391-457a-ac99-01d35c496fa1-horizon-secret-key\") pod \"81ec49d8-d391-457a-ac99-01d35c496fa1\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543669 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-log-httpd\") pod \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\" (UID: \"35491618-81a4-4f75-927f-6b6a3d0c9ce2\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.543700 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zprgp\" (UniqueName: \"kubernetes.io/projected/81ec49d8-d391-457a-ac99-01d35c496fa1-kube-api-access-zprgp\") pod \"81ec49d8-d391-457a-ac99-01d35c496fa1\" (UID: \"81ec49d8-d391-457a-ac99-01d35c496fa1\") "
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.544461 4817 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.544488 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.544503 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65lhq\" (UniqueName: \"kubernetes.io/projected/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-kube-api-access-65lhq\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.544517 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-logs\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.544529 4817 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.544542 4817 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca487333-68c9-470e-b299-c8331d9b59b6-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.544553 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.545182 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "35491618-81a4-4f75-927f-6b6a3d0c9ce2" (UID: "35491618-81a4-4f75-927f-6b6a3d0c9ce2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.545481 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ec49d8-d391-457a-ac99-01d35c496fa1-logs" (OuterVolumeSpecName: "logs") pod "81ec49d8-d391-457a-ac99-01d35c496fa1" (UID: "81ec49d8-d391-457a-ac99-01d35c496fa1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.547868 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "35491618-81a4-4f75-927f-6b6a3d0c9ce2" (UID: "35491618-81a4-4f75-927f-6b6a3d0c9ce2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.549050 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35491618-81a4-4f75-927f-6b6a3d0c9ce2-kube-api-access-8bltf" (OuterVolumeSpecName: "kube-api-access-8bltf") pod "35491618-81a4-4f75-927f-6b6a3d0c9ce2" (UID: "35491618-81a4-4f75-927f-6b6a3d0c9ce2"). InnerVolumeSpecName "kube-api-access-8bltf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.549228 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-7p784"]
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.551667 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ec49d8-d391-457a-ac99-01d35c496fa1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "81ec49d8-d391-457a-ac99-01d35c496fa1" (UID: "81ec49d8-d391-457a-ac99-01d35c496fa1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.554997 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ec49d8-d391-457a-ac99-01d35c496fa1-kube-api-access-zprgp" (OuterVolumeSpecName: "kube-api-access-zprgp") pod "81ec49d8-d391-457a-ac99-01d35c496fa1" (UID: "81ec49d8-d391-457a-ac99-01d35c496fa1"). InnerVolumeSpecName "kube-api-access-zprgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.564107 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-scripts" (OuterVolumeSpecName: "scripts") pod "35491618-81a4-4f75-927f-6b6a3d0c9ce2" (UID: "35491618-81a4-4f75-927f-6b6a3d0c9ce2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.582990 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "35491618-81a4-4f75-927f-6b6a3d0c9ce2" (UID: "35491618-81a4-4f75-927f-6b6a3d0c9ce2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.598094 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-7p784"]
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.610200 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-config-data" (OuterVolumeSpecName: "config-data") pod "81ec49d8-d391-457a-ac99-01d35c496fa1" (UID: "81ec49d8-d391-457a-ac99-01d35c496fa1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.624928 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-scripts" (OuterVolumeSpecName: "scripts") pod "81ec49d8-d391-457a-ac99-01d35c496fa1" (UID: "81ec49d8-d391-457a-ac99-01d35c496fa1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.634522 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "35491618-81a4-4f75-927f-6b6a3d0c9ce2" (UID: "35491618-81a4-4f75-927f-6b6a3d0c9ce2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646636 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zprgp\" (UniqueName: \"kubernetes.io/projected/81ec49d8-d391-457a-ac99-01d35c496fa1-kube-api-access-zprgp\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646676 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646687 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646697 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ec49d8-d391-457a-ac99-01d35c496fa1-logs\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646707 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bltf\" (UniqueName: \"kubernetes.io/projected/35491618-81a4-4f75-927f-6b6a3d0c9ce2-kube-api-access-8bltf\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646715 4817 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646727 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81ec49d8-d391-457a-ac99-01d35c496fa1-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646736 4817 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646744 4817 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646752 4817 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81ec49d8-d391-457a-ac99-01d35c496fa1-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.646760 4817 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/35491618-81a4-4f75-927f-6b6a3d0c9ce2-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.665227 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35491618-81a4-4f75-927f-6b6a3d0c9ce2" (UID: "35491618-81a4-4f75-927f-6b6a3d0c9ce2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.706043 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-config-data" (OuterVolumeSpecName: "config-data") pod "35491618-81a4-4f75-927f-6b6a3d0c9ce2" (UID: "35491618-81a4-4f75-927f-6b6a3d0c9ce2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.748642 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:05 crc kubenswrapper[4817]: I0314 06:25:05.748696 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35491618-81a4-4f75-927f-6b6a3d0c9ce2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.198838 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d4f6d988c-zkbs5" event={"ID":"81ec49d8-d391-457a-ac99-01d35c496fa1","Type":"ContainerDied","Data":"f5a9ed74b5a454da4aa0c48ebf419a36e1adf7be6439d835f7ae05cdebcd3535"}
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.199306 4817 scope.go:117] "RemoveContainer" containerID="6fc0228fbf51174f2dac4264de95cfeaac92808a235c7d4d525afa481d8d521d"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.199216 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d4f6d988c-zkbs5"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.203243 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2adc02b-dfa9-4679-9c32-e127baabf2ad","Type":"ContainerStarted","Data":"377faf4ec05d73d821d0852c00eca9c7d52c9fdfdf673eb41091095d63dd070b"}
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.208245 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fc7cb4589-8j695" event={"ID":"e9d2f538-a74e-441c-9406-fe1d3ef5b8c7","Type":"ContainerDied","Data":"1e5a02b050d9a838d2f2dd4a96a19cf7c27103435711db7f8b988da2d2c9ad77"}
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.208335 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fc7cb4589-8j695"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.220216 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"35491618-81a4-4f75-927f-6b6a3d0c9ce2","Type":"ContainerDied","Data":"d154b8c623388e15df3f460cc0bfd3cdb78da77369e206c15b943a33117492fc"}
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.220342 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.299751 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fc7cb4589-8j695"]
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.343170 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7fc7cb4589-8j695"]
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.378650 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d4f6d988c-zkbs5"]
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.410666 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d4f6d988c-zkbs5"]
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.432325 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.460935 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.484632 4817 scope.go:117] "RemoveContainer" containerID="a0617a5a58a94267c95d477de18a8d3b5d7aafac5ac99bd8db506575ae119f7c"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497130 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497622 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerName="horizon"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497642 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerName="horizon"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497654 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca487333-68c9-470e-b299-c8331d9b59b6" containerName="init"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497661 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca487333-68c9-470e-b299-c8331d9b59b6" containerName="init"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497677 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca487333-68c9-470e-b299-c8331d9b59b6" containerName="dnsmasq-dns"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497684 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca487333-68c9-470e-b299-c8331d9b59b6" containerName="dnsmasq-dns"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497699 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="ceilometer-notification-agent"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497705 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="ceilometer-notification-agent"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497714 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerName="horizon"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497721 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerName="horizon"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497736 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerName="horizon-log"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497743 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerName="horizon-log"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497757 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="sg-core"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497764 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="sg-core"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497776 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="proxy-httpd"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497782 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="proxy-httpd"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497798 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="ceilometer-central-agent"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497806 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="ceilometer-central-agent"
Mar 14 06:25:06 crc kubenswrapper[4817]: E0314 06:25:06.497818 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerName="horizon-log"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.497823 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerName="horizon-log"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498092 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerName="horizon-log"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498107 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="ceilometer-central-agent"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498126 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="ceilometer-notification-agent"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498136 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerName="horizon"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498143 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" containerName="horizon-log"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498152 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" containerName="horizon"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498160 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca487333-68c9-470e-b299-c8331d9b59b6" containerName="dnsmasq-dns"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498173 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="sg-core"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.498182 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="proxy-httpd"
Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.499956 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.506243 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.506540 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.506431 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.545983 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.614091 4817 scope.go:117] "RemoveContainer" containerID="c07a4ec1532ca89b8c7ba85f33d8ea433195ac4bdc848bed55ff73050ee22f20" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.670103 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-config-data\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.670212 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.670348 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-run-httpd\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " 
pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.670409 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.670484 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-scripts\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.670566 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-log-httpd\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.670669 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.670739 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4tg\" (UniqueName: \"kubernetes.io/projected/1edeb569-e0b2-498d-b699-21daef1a28f0-kube-api-access-rc4tg\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.757434 4817 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" path="/var/lib/kubelet/pods/35491618-81a4-4f75-927f-6b6a3d0c9ce2/volumes" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.758708 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ec49d8-d391-457a-ac99-01d35c496fa1" path="/var/lib/kubelet/pods/81ec49d8-d391-457a-ac99-01d35c496fa1/volumes" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.760201 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca487333-68c9-470e-b299-c8331d9b59b6" path="/var/lib/kubelet/pods/ca487333-68c9-470e-b299-c8331d9b59b6/volumes" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.761046 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d2f538-a74e-441c-9406-fe1d3ef5b8c7" path="/var/lib/kubelet/pods/e9d2f538-a74e-441c-9406-fe1d3ef5b8c7/volumes" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.772326 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-scripts\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.772402 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-log-httpd\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.772457 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: 
I0314 06:25:06.772492 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc4tg\" (UniqueName: \"kubernetes.io/projected/1edeb569-e0b2-498d-b699-21daef1a28f0-kube-api-access-rc4tg\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.772532 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-config-data\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.772573 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.772599 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-run-httpd\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.772628 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.777122 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-run-httpd\") pod 
\"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.777381 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-log-httpd\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.781859 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.784362 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.786769 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-config-data\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.787607 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-scripts\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.800409 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.810355 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc4tg\" (UniqueName: \"kubernetes.io/projected/1edeb569-e0b2-498d-b699-21daef1a28f0-kube-api-access-rc4tg\") pod \"ceilometer-0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.831035 4817 scope.go:117] "RemoveContainer" containerID="264ff85bd6df9898a46db7830d30e910c4265b43f7e2d385d664a0c50e1fc360" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.849792 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 06:25:06 crc kubenswrapper[4817]: I0314 06:25:06.890183 4817 scope.go:117] "RemoveContainer" containerID="203c4234bf50ffef3cbcd5c20d318313984a98c3213c7960188cd537798bba74" Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.012179 4817 scope.go:117] "RemoveContainer" containerID="15fab14509bd8b37d8649d0d4e30717dbe50bc917695e764a7e241d22cb0cff3" Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.069233 4817 scope.go:117] "RemoveContainer" containerID="4c588abde7ecaea57d824b62ede42d3b5433ccc584206a55a3b15187c891a59a" Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.104444 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.106341 4817 scope.go:117] "RemoveContainer" containerID="9998e0aaa0f4549d9f30f501352b9a600999db9deb43bc866d4789909e5ef60c" Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.242688 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"a2adc02b-dfa9-4679-9c32-e127baabf2ad","Type":"ContainerStarted","Data":"67b7a10878b9f7701be0ae9c86cc0e58dcd7007f748927a677dcec406b8da9c9"} Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.310569 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-767cf48f8d-lxbdx" Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.340425 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.877800167 podStartE2EDuration="14.340397079s" podCreationTimestamp="2026-03-14 06:24:53 +0000 UTC" firstStartedPulling="2026-03-14 06:24:54.53056674 +0000 UTC m=+3148.568827486" lastFinishedPulling="2026-03-14 06:25:04.993163652 +0000 UTC m=+3159.031424398" observedRunningTime="2026-03-14 06:25:07.28081569 +0000 UTC m=+3161.319076436" watchObservedRunningTime="2026-03-14 06:25:07.340397079 +0000 UTC m=+3161.378657815" Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.406365 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.416040 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.435982 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-559cff965b-fnvkc"] Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.436246 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-559cff965b-fnvkc" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon-log" containerID="cri-o://7ddfc58cb20166fd23d72f3b715cced0d3c58321a3e5caaa45d33f6e8aacc7db" gracePeriod=30 Mar 14 06:25:07 crc kubenswrapper[4817]: I0314 06:25:07.436790 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-559cff965b-fnvkc" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" 
containerName="horizon" containerID="cri-o://35b3f963a770a4c10325006a3e9273407c56234a0cf6f85f6388b9bf48b6471e" gracePeriod=30 Mar 14 06:25:08 crc kubenswrapper[4817]: I0314 06:25:08.205745 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:08 crc kubenswrapper[4817]: I0314 06:25:08.272731 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerStarted","Data":"e87aa478edc2cca9f8d512c23ba586277c32899988aba3087415f4373a9edfe3"} Mar 14 06:25:09 crc kubenswrapper[4817]: I0314 06:25:09.285372 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerStarted","Data":"b614f477b73c8de4bfd28c6e0f5ee222737d9f044967043379617f496b11fdae"} Mar 14 06:25:09 crc kubenswrapper[4817]: I0314 06:25:09.285736 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerStarted","Data":"09c2f3d76c3cf015175b9d777c3ebcaa904e7c91727736a546c0db26a885fba4"} Mar 14 06:25:10 crc kubenswrapper[4817]: I0314 06:25:10.302721 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerStarted","Data":"75da5ee9e34560c54c759615eb8cb1fa9ad256cad7d43529ca74485fc8a07eb3"} Mar 14 06:25:11 crc kubenswrapper[4817]: I0314 06:25:11.315264 4817 generic.go:334] "Generic (PLEG): container finished" podID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerID="35b3f963a770a4c10325006a3e9273407c56234a0cf6f85f6388b9bf48b6471e" exitCode=0 Mar 14 06:25:11 crc kubenswrapper[4817]: I0314 06:25:11.315333 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559cff965b-fnvkc" 
event={"ID":"56147eb2-93e8-41a7-b6c9-4f40e07263c8","Type":"ContainerDied","Data":"35b3f963a770a4c10325006a3e9273407c56234a0cf6f85f6388b9bf48b6471e"} Mar 14 06:25:11 crc kubenswrapper[4817]: I0314 06:25:11.404255 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-559cff965b-fnvkc" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.7:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.7:8443: connect: connection refused" Mar 14 06:25:12 crc kubenswrapper[4817]: I0314 06:25:12.328203 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerStarted","Data":"3adde7e4994f3b246d51200c9ea7de5426c946202a6c8026e1ac6b0672cc0c3d"} Mar 14 06:25:12 crc kubenswrapper[4817]: I0314 06:25:12.328694 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 06:25:12 crc kubenswrapper[4817]: I0314 06:25:12.328596 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="ceilometer-central-agent" containerID="cri-o://09c2f3d76c3cf015175b9d777c3ebcaa904e7c91727736a546c0db26a885fba4" gracePeriod=30 Mar 14 06:25:12 crc kubenswrapper[4817]: I0314 06:25:12.328853 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="sg-core" containerID="cri-o://75da5ee9e34560c54c759615eb8cb1fa9ad256cad7d43529ca74485fc8a07eb3" gracePeriod=30 Mar 14 06:25:12 crc kubenswrapper[4817]: I0314 06:25:12.328935 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="proxy-httpd" 
containerID="cri-o://3adde7e4994f3b246d51200c9ea7de5426c946202a6c8026e1ac6b0672cc0c3d" gracePeriod=30 Mar 14 06:25:12 crc kubenswrapper[4817]: I0314 06:25:12.328963 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="ceilometer-notification-agent" containerID="cri-o://b614f477b73c8de4bfd28c6e0f5ee222737d9f044967043379617f496b11fdae" gracePeriod=30 Mar 14 06:25:12 crc kubenswrapper[4817]: I0314 06:25:12.355661 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.0434153840000002 podStartE2EDuration="6.355639309s" podCreationTimestamp="2026-03-14 06:25:06 +0000 UTC" firstStartedPulling="2026-03-14 06:25:07.415719475 +0000 UTC m=+3161.453980221" lastFinishedPulling="2026-03-14 06:25:11.7279434 +0000 UTC m=+3165.766204146" observedRunningTime="2026-03-14 06:25:12.354733363 +0000 UTC m=+3166.392994119" watchObservedRunningTime="2026-03-14 06:25:12.355639309 +0000 UTC m=+3166.393900055" Mar 14 06:25:12 crc kubenswrapper[4817]: I0314 06:25:12.732928 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:25:12 crc kubenswrapper[4817]: E0314 06:25:12.733259 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:25:13 crc kubenswrapper[4817]: I0314 06:25:13.364132 4817 generic.go:334] "Generic (PLEG): container finished" podID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerID="3adde7e4994f3b246d51200c9ea7de5426c946202a6c8026e1ac6b0672cc0c3d" 
exitCode=0 Mar 14 06:25:13 crc kubenswrapper[4817]: I0314 06:25:13.364565 4817 generic.go:334] "Generic (PLEG): container finished" podID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerID="75da5ee9e34560c54c759615eb8cb1fa9ad256cad7d43529ca74485fc8a07eb3" exitCode=2 Mar 14 06:25:13 crc kubenswrapper[4817]: I0314 06:25:13.364583 4817 generic.go:334] "Generic (PLEG): container finished" podID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerID="b614f477b73c8de4bfd28c6e0f5ee222737d9f044967043379617f496b11fdae" exitCode=0 Mar 14 06:25:13 crc kubenswrapper[4817]: I0314 06:25:13.364192 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerDied","Data":"3adde7e4994f3b246d51200c9ea7de5426c946202a6c8026e1ac6b0672cc0c3d"} Mar 14 06:25:13 crc kubenswrapper[4817]: I0314 06:25:13.364673 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerDied","Data":"75da5ee9e34560c54c759615eb8cb1fa9ad256cad7d43529ca74485fc8a07eb3"} Mar 14 06:25:13 crc kubenswrapper[4817]: I0314 06:25:13.364729 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerDied","Data":"b614f477b73c8de4bfd28c6e0f5ee222737d9f044967043379617f496b11fdae"} Mar 14 06:25:13 crc kubenswrapper[4817]: I0314 06:25:13.625243 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.399370 4817 generic.go:334] "Generic (PLEG): container finished" podID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerID="09c2f3d76c3cf015175b9d777c3ebcaa904e7c91727736a546c0db26a885fba4" exitCode=0 Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.399485 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerDied","Data":"09c2f3d76c3cf015175b9d777c3ebcaa904e7c91727736a546c0db26a885fba4"} Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.650757 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.828861 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-scripts\") pod \"1edeb569-e0b2-498d-b699-21daef1a28f0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.828947 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-ceilometer-tls-certs\") pod \"1edeb569-e0b2-498d-b699-21daef1a28f0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.828978 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-combined-ca-bundle\") pod \"1edeb569-e0b2-498d-b699-21daef1a28f0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.829127 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-config-data\") pod \"1edeb569-e0b2-498d-b699-21daef1a28f0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.829155 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-run-httpd\") pod 
\"1edeb569-e0b2-498d-b699-21daef1a28f0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.829232 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-log-httpd\") pod \"1edeb569-e0b2-498d-b699-21daef1a28f0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.829278 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-sg-core-conf-yaml\") pod \"1edeb569-e0b2-498d-b699-21daef1a28f0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.829334 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc4tg\" (UniqueName: \"kubernetes.io/projected/1edeb569-e0b2-498d-b699-21daef1a28f0-kube-api-access-rc4tg\") pod \"1edeb569-e0b2-498d-b699-21daef1a28f0\" (UID: \"1edeb569-e0b2-498d-b699-21daef1a28f0\") " Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.829585 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1edeb569-e0b2-498d-b699-21daef1a28f0" (UID: "1edeb569-e0b2-498d-b699-21daef1a28f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.829719 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1edeb569-e0b2-498d-b699-21daef1a28f0" (UID: "1edeb569-e0b2-498d-b699-21daef1a28f0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.830249 4817 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.830280 4817 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1edeb569-e0b2-498d-b699-21daef1a28f0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.836374 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edeb569-e0b2-498d-b699-21daef1a28f0-kube-api-access-rc4tg" (OuterVolumeSpecName: "kube-api-access-rc4tg") pod "1edeb569-e0b2-498d-b699-21daef1a28f0" (UID: "1edeb569-e0b2-498d-b699-21daef1a28f0"). InnerVolumeSpecName "kube-api-access-rc4tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.855021 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-scripts" (OuterVolumeSpecName: "scripts") pod "1edeb569-e0b2-498d-b699-21daef1a28f0" (UID: "1edeb569-e0b2-498d-b699-21daef1a28f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.862569 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1edeb569-e0b2-498d-b699-21daef1a28f0" (UID: "1edeb569-e0b2-498d-b699-21daef1a28f0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.891631 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1edeb569-e0b2-498d-b699-21daef1a28f0" (UID: "1edeb569-e0b2-498d-b699-21daef1a28f0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.932963 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.933009 4817 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.933025 4817 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.933038 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc4tg\" (UniqueName: \"kubernetes.io/projected/1edeb569-e0b2-498d-b699-21daef1a28f0-kube-api-access-rc4tg\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.945533 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1edeb569-e0b2-498d-b699-21daef1a28f0" (UID: "1edeb569-e0b2-498d-b699-21daef1a28f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:14 crc kubenswrapper[4817]: I0314 06:25:14.947269 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-config-data" (OuterVolumeSpecName: "config-data") pod "1edeb569-e0b2-498d-b699-21daef1a28f0" (UID: "1edeb569-e0b2-498d-b699-21daef1a28f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.035323 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.035366 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edeb569-e0b2-498d-b699-21daef1a28f0-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.122483 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.187653 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.415465 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1edeb569-e0b2-498d-b699-21daef1a28f0","Type":"ContainerDied","Data":"e87aa478edc2cca9f8d512c23ba586277c32899988aba3087415f4373a9edfe3"} Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.415916 4817 scope.go:117] "RemoveContainer" containerID="3adde7e4994f3b246d51200c9ea7de5426c946202a6c8026e1ac6b0672cc0c3d" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.415497 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.415836 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="a44876b2-3994-4528-90b5-98f450b7592e" containerName="probe" containerID="cri-o://f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065" gracePeriod=30 Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.415725 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="a44876b2-3994-4528-90b5-98f450b7592e" containerName="manila-scheduler" containerID="cri-o://903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc" gracePeriod=30 Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.455969 4817 scope.go:117] "RemoveContainer" containerID="75da5ee9e34560c54c759615eb8cb1fa9ad256cad7d43529ca74485fc8a07eb3" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.472525 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.483011 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.488478 4817 scope.go:117] "RemoveContainer" containerID="b614f477b73c8de4bfd28c6e0f5ee222737d9f044967043379617f496b11fdae" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.516713 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:15 crc kubenswrapper[4817]: E0314 06:25:15.517496 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="ceilometer-notification-agent" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.517547 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="ceilometer-notification-agent" Mar 14 06:25:15 crc 
kubenswrapper[4817]: E0314 06:25:15.517574 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="ceilometer-central-agent" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.517584 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="ceilometer-central-agent" Mar 14 06:25:15 crc kubenswrapper[4817]: E0314 06:25:15.517602 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="sg-core" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.517614 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="sg-core" Mar 14 06:25:15 crc kubenswrapper[4817]: E0314 06:25:15.517658 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="proxy-httpd" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.517671 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="proxy-httpd" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.518024 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="ceilometer-notification-agent" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.518052 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="ceilometer-central-agent" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.518069 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="proxy-httpd" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.518086 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" containerName="sg-core" Mar 14 
06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.548019 4817 scope.go:117] "RemoveContainer" containerID="09c2f3d76c3cf015175b9d777c3ebcaa904e7c91727736a546c0db26a885fba4" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.556867 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.560526 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.561748 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.561988 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.591760 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.656496 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4kqr\" (UniqueName: \"kubernetes.io/projected/edf54d5a-1b48-43ca-a621-e815ccf42e59-kube-api-access-d4kqr\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.656659 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-scripts\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.656720 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.656748 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf54d5a-1b48-43ca-a621-e815ccf42e59-run-httpd\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.656959 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf54d5a-1b48-43ca-a621-e815ccf42e59-log-httpd\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.657033 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.657344 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-config-data\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.657478 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.759681 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-config-data\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.759766 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.759883 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4kqr\" (UniqueName: \"kubernetes.io/projected/edf54d5a-1b48-43ca-a621-e815ccf42e59-kube-api-access-d4kqr\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.759929 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-scripts\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.759947 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.759961 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf54d5a-1b48-43ca-a621-e815ccf42e59-run-httpd\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.759987 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf54d5a-1b48-43ca-a621-e815ccf42e59-log-httpd\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.760012 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.761828 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf54d5a-1b48-43ca-a621-e815ccf42e59-run-httpd\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.762031 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf54d5a-1b48-43ca-a621-e815ccf42e59-log-httpd\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.765823 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-scripts\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.766402 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.767776 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.768122 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.774196 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf54d5a-1b48-43ca-a621-e815ccf42e59-config-data\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.783465 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4kqr\" (UniqueName: \"kubernetes.io/projected/edf54d5a-1b48-43ca-a621-e815ccf42e59-kube-api-access-d4kqr\") pod \"ceilometer-0\" (UID: \"edf54d5a-1b48-43ca-a621-e815ccf42e59\") " pod="openstack/ceilometer-0" Mar 14 06:25:15 crc kubenswrapper[4817]: I0314 06:25:15.889210 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 06:25:16 crc kubenswrapper[4817]: W0314 06:25:16.352768 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedf54d5a_1b48_43ca_a621_e815ccf42e59.slice/crio-b3dac4290e59634beba38e212f65adfbf940797182b5c7c32e3f8557525b9d67 WatchSource:0}: Error finding container b3dac4290e59634beba38e212f65adfbf940797182b5c7c32e3f8557525b9d67: Status 404 returned error can't find the container with id b3dac4290e59634beba38e212f65adfbf940797182b5c7c32e3f8557525b9d67 Mar 14 06:25:16 crc kubenswrapper[4817]: I0314 06:25:16.357145 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 06:25:16 crc kubenswrapper[4817]: I0314 06:25:16.426671 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf54d5a-1b48-43ca-a621-e815ccf42e59","Type":"ContainerStarted","Data":"b3dac4290e59634beba38e212f65adfbf940797182b5c7c32e3f8557525b9d67"} Mar 14 06:25:16 crc kubenswrapper[4817]: I0314 06:25:16.430126 4817 generic.go:334] "Generic (PLEG): container finished" podID="a44876b2-3994-4528-90b5-98f450b7592e" containerID="f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065" exitCode=0 Mar 14 06:25:16 crc kubenswrapper[4817]: I0314 06:25:16.430201 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a44876b2-3994-4528-90b5-98f450b7592e","Type":"ContainerDied","Data":"f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065"} Mar 14 06:25:16 crc kubenswrapper[4817]: I0314 06:25:16.755198 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1edeb569-e0b2-498d-b699-21daef1a28f0" path="/var/lib/kubelet/pods/1edeb569-e0b2-498d-b699-21daef1a28f0/volumes" Mar 14 06:25:17 crc kubenswrapper[4817]: I0314 06:25:17.443960 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"edf54d5a-1b48-43ca-a621-e815ccf42e59","Type":"ContainerStarted","Data":"b9066d55a7e48e80c07495b8c33fdaa3d9c2783f0582d4aa80c358790a43c2b5"} Mar 14 06:25:17 crc kubenswrapper[4817]: I0314 06:25:17.969762 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.117308 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data-custom\") pod \"a44876b2-3994-4528-90b5-98f450b7592e\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.117445 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctznb\" (UniqueName: \"kubernetes.io/projected/a44876b2-3994-4528-90b5-98f450b7592e-kube-api-access-ctznb\") pod \"a44876b2-3994-4528-90b5-98f450b7592e\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.117597 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-combined-ca-bundle\") pod \"a44876b2-3994-4528-90b5-98f450b7592e\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.117686 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-scripts\") pod \"a44876b2-3994-4528-90b5-98f450b7592e\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.117773 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data\") pod \"a44876b2-3994-4528-90b5-98f450b7592e\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.117873 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a44876b2-3994-4528-90b5-98f450b7592e-etc-machine-id\") pod \"a44876b2-3994-4528-90b5-98f450b7592e\" (UID: \"a44876b2-3994-4528-90b5-98f450b7592e\") " Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.118127 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a44876b2-3994-4528-90b5-98f450b7592e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a44876b2-3994-4528-90b5-98f450b7592e" (UID: "a44876b2-3994-4528-90b5-98f450b7592e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.118931 4817 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a44876b2-3994-4528-90b5-98f450b7592e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.123640 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a44876b2-3994-4528-90b5-98f450b7592e" (UID: "a44876b2-3994-4528-90b5-98f450b7592e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.123795 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-scripts" (OuterVolumeSpecName: "scripts") pod "a44876b2-3994-4528-90b5-98f450b7592e" (UID: "a44876b2-3994-4528-90b5-98f450b7592e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.124254 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44876b2-3994-4528-90b5-98f450b7592e-kube-api-access-ctznb" (OuterVolumeSpecName: "kube-api-access-ctznb") pod "a44876b2-3994-4528-90b5-98f450b7592e" (UID: "a44876b2-3994-4528-90b5-98f450b7592e"). InnerVolumeSpecName "kube-api-access-ctznb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.179377 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a44876b2-3994-4528-90b5-98f450b7592e" (UID: "a44876b2-3994-4528-90b5-98f450b7592e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.222106 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.222159 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.222177 4817 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.222194 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctznb\" (UniqueName: \"kubernetes.io/projected/a44876b2-3994-4528-90b5-98f450b7592e-kube-api-access-ctznb\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.248596 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data" (OuterVolumeSpecName: "config-data") pod "a44876b2-3994-4528-90b5-98f450b7592e" (UID: "a44876b2-3994-4528-90b5-98f450b7592e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.325092 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a44876b2-3994-4528-90b5-98f450b7592e-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.467069 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf54d5a-1b48-43ca-a621-e815ccf42e59","Type":"ContainerStarted","Data":"172af177e30abccfb994e7b720f298270f32273db7a658698b10927d7f183525"} Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.467135 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf54d5a-1b48-43ca-a621-e815ccf42e59","Type":"ContainerStarted","Data":"e9fd050c34a3df99a4204252a8e731fc837b2b8e7a4ad362133e16172b50297d"} Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.470609 4817 generic.go:334] "Generic (PLEG): container finished" podID="a44876b2-3994-4528-90b5-98f450b7592e" containerID="903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc" exitCode=0 Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.470662 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a44876b2-3994-4528-90b5-98f450b7592e","Type":"ContainerDied","Data":"903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc"} Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.470695 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a44876b2-3994-4528-90b5-98f450b7592e","Type":"ContainerDied","Data":"8609da3544159220e42fb6353e8aa1756166d108c35782636eaf17924d360918"} Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.470721 4817 scope.go:117] "RemoveContainer" containerID="f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 
06:25:18.471000 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.549604 4817 scope.go:117] "RemoveContainer" containerID="903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.567321 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.584274 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.584516 4817 scope.go:117] "RemoveContainer" containerID="f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065" Mar 14 06:25:18 crc kubenswrapper[4817]: E0314 06:25:18.585560 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065\": container with ID starting with f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065 not found: ID does not exist" containerID="f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.585606 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065"} err="failed to get container status \"f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065\": rpc error: code = NotFound desc = could not find container \"f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065\": container with ID starting with f1e9116361b4c5e84123321241caf536ac8ae6a8120c9248845a281c83773065 not found: ID does not exist" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.585636 4817 scope.go:117] "RemoveContainer" 
containerID="903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc" Mar 14 06:25:18 crc kubenswrapper[4817]: E0314 06:25:18.586025 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc\": container with ID starting with 903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc not found: ID does not exist" containerID="903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.586106 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc"} err="failed to get container status \"903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc\": rpc error: code = NotFound desc = could not find container \"903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc\": container with ID starting with 903d4223eddff62fea055be4499a2f510640319ce864ee5d37da7746edcdfebc not found: ID does not exist" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.603203 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Mar 14 06:25:18 crc kubenswrapper[4817]: E0314 06:25:18.603822 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44876b2-3994-4528-90b5-98f450b7592e" containerName="manila-scheduler" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.603846 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44876b2-3994-4528-90b5-98f450b7592e" containerName="manila-scheduler" Mar 14 06:25:18 crc kubenswrapper[4817]: E0314 06:25:18.603866 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44876b2-3994-4528-90b5-98f450b7592e" containerName="probe" Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.603875 4817 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a44876b2-3994-4528-90b5-98f450b7592e" containerName="probe"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.604144 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44876b2-3994-4528-90b5-98f450b7592e" containerName="probe"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.604169 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44876b2-3994-4528-90b5-98f450b7592e" containerName="manila-scheduler"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.605494 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.611021 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.627229 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.739873 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-scripts\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.741072 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-config-data\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.741345 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrhsv\" (UniqueName: \"kubernetes.io/projected/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-kube-api-access-vrhsv\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.741415 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.741482 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.741645 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.763725 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a44876b2-3994-4528-90b5-98f450b7592e" path="/var/lib/kubelet/pods/a44876b2-3994-4528-90b5-98f450b7592e/volumes"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.844352 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-scripts\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.844493 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-config-data\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.845499 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrhsv\" (UniqueName: \"kubernetes.io/projected/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-kube-api-access-vrhsv\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.845572 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.846277 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.846362 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.846799 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.850397 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-config-data\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.851526 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-scripts\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.861550 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.862250 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.865953 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrhsv\" (UniqueName: \"kubernetes.io/projected/ea6cf2e0-1f09-4e7b-8e73-21363bcad511-kube-api-access-vrhsv\") pod \"manila-scheduler-0\" (UID: \"ea6cf2e0-1f09-4e7b-8e73-21363bcad511\") " pod="openstack/manila-scheduler-0"
Mar 14 06:25:18 crc kubenswrapper[4817]: I0314 06:25:18.929663 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Mar 14 06:25:19 crc kubenswrapper[4817]: W0314 06:25:19.420773 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea6cf2e0_1f09_4e7b_8e73_21363bcad511.slice/crio-c194abbf82325bc272659ba0e1130fdf03acd396228b163d5d29baa4e1c36875 WatchSource:0}: Error finding container c194abbf82325bc272659ba0e1130fdf03acd396228b163d5d29baa4e1c36875: Status 404 returned error can't find the container with id c194abbf82325bc272659ba0e1130fdf03acd396228b163d5d29baa4e1c36875
Mar 14 06:25:19 crc kubenswrapper[4817]: I0314 06:25:19.426243 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Mar 14 06:25:19 crc kubenswrapper[4817]: I0314 06:25:19.484222 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ea6cf2e0-1f09-4e7b-8e73-21363bcad511","Type":"ContainerStarted","Data":"c194abbf82325bc272659ba0e1130fdf03acd396228b163d5d29baa4e1c36875"}
Mar 14 06:25:20 crc kubenswrapper[4817]: I0314 06:25:20.502050 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ea6cf2e0-1f09-4e7b-8e73-21363bcad511","Type":"ContainerStarted","Data":"f808c9fafd550ff60e13512606fe38267e0ca7a286579faf4f5795ab6dceda22"}
Mar 14 06:25:20 crc kubenswrapper[4817]: I0314 06:25:20.502586 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ea6cf2e0-1f09-4e7b-8e73-21363bcad511","Type":"ContainerStarted","Data":"a01e7f4423b24c0889c4c57c676398dea1f6d068f9b5ed965d30c2ac47031597"}
Mar 14 06:25:20 crc kubenswrapper[4817]: I0314 06:25:20.505274 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf54d5a-1b48-43ca-a621-e815ccf42e59","Type":"ContainerStarted","Data":"cf35926caeccf3f5f9bde275334a4f7e91cf9214739fba48f7e784e572af8a75"}
Mar 14 06:25:20 crc kubenswrapper[4817]: I0314 06:25:20.506284 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 14 06:25:20 crc kubenswrapper[4817]: I0314 06:25:20.537334 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.537312366 podStartE2EDuration="2.537312366s" podCreationTimestamp="2026-03-14 06:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:25:20.526786787 +0000 UTC m=+3174.565047533" watchObservedRunningTime="2026-03-14 06:25:20.537312366 +0000 UTC m=+3174.575573112"
Mar 14 06:25:20 crc kubenswrapper[4817]: I0314 06:25:20.885069 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0"
Mar 14 06:25:20 crc kubenswrapper[4817]: I0314 06:25:20.927322 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.086823975 podStartE2EDuration="5.927293854s" podCreationTimestamp="2026-03-14 06:25:15 +0000 UTC" firstStartedPulling="2026-03-14 06:25:16.355552179 +0000 UTC m=+3170.393812925" lastFinishedPulling="2026-03-14 06:25:20.196022048 +0000 UTC m=+3174.234282804" observedRunningTime="2026-03-14 06:25:20.565065113 +0000 UTC m=+3174.603325869" watchObservedRunningTime="2026-03-14 06:25:20.927293854 +0000 UTC m=+3174.965554600"
Mar 14 06:25:21 crc kubenswrapper[4817]: I0314 06:25:21.403860 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-559cff965b-fnvkc" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.7:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.7:8443: connect: connection refused"
Mar 14 06:25:24 crc kubenswrapper[4817]: I0314 06:25:24.731980 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537"
Mar 14 06:25:24 crc kubenswrapper[4817]: E0314 06:25:24.732693 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:25:25 crc kubenswrapper[4817]: I0314 06:25:25.348722 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Mar 14 06:25:25 crc kubenswrapper[4817]: I0314 06:25:25.460136 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Mar 14 06:25:25 crc kubenswrapper[4817]: I0314 06:25:25.553405 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerName="manila-share" containerID="cri-o://377faf4ec05d73d821d0852c00eca9c7d52c9fdfdf673eb41091095d63dd070b" gracePeriod=30
Mar 14 06:25:25 crc kubenswrapper[4817]: I0314 06:25:25.553587 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerName="probe" containerID="cri-o://67b7a10878b9f7701be0ae9c86cc0e58dcd7007f748927a677dcec406b8da9c9" gracePeriod=30
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.568864 4817 generic.go:334] "Generic (PLEG): container finished" podID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerID="67b7a10878b9f7701be0ae9c86cc0e58dcd7007f748927a677dcec406b8da9c9" exitCode=0
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.569331 4817 generic.go:334] "Generic (PLEG): container finished" podID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerID="377faf4ec05d73d821d0852c00eca9c7d52c9fdfdf673eb41091095d63dd070b" exitCode=1
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.569367 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2adc02b-dfa9-4679-9c32-e127baabf2ad","Type":"ContainerDied","Data":"67b7a10878b9f7701be0ae9c86cc0e58dcd7007f748927a677dcec406b8da9c9"}
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.569405 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2adc02b-dfa9-4679-9c32-e127baabf2ad","Type":"ContainerDied","Data":"377faf4ec05d73d821d0852c00eca9c7d52c9fdfdf673eb41091095d63dd070b"}
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.668691 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.678799 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-scripts\") pod \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") "
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.678872 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data\") pod \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") "
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.679014 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-ceph\") pod \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") "
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.679182 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data-custom\") pod \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") "
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.679296 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-var-lib-manila\") pod \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") "
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.679347 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-etc-machine-id\") pod \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") "
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.679441 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5ftr\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-kube-api-access-j5ftr\") pod \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") "
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.679494 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-combined-ca-bundle\") pod \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\" (UID: \"a2adc02b-dfa9-4679-9c32-e127baabf2ad\") "
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.679977 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "a2adc02b-dfa9-4679-9c32-e127baabf2ad" (UID: "a2adc02b-dfa9-4679-9c32-e127baabf2ad"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.680136 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a2adc02b-dfa9-4679-9c32-e127baabf2ad" (UID: "a2adc02b-dfa9-4679-9c32-e127baabf2ad"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.680665 4817 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-var-lib-manila\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.680687 4817 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2adc02b-dfa9-4679-9c32-e127baabf2ad-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.687345 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2adc02b-dfa9-4679-9c32-e127baabf2ad" (UID: "a2adc02b-dfa9-4679-9c32-e127baabf2ad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.704257 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-kube-api-access-j5ftr" (OuterVolumeSpecName: "kube-api-access-j5ftr") pod "a2adc02b-dfa9-4679-9c32-e127baabf2ad" (UID: "a2adc02b-dfa9-4679-9c32-e127baabf2ad"). InnerVolumeSpecName "kube-api-access-j5ftr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.704691 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-scripts" (OuterVolumeSpecName: "scripts") pod "a2adc02b-dfa9-4679-9c32-e127baabf2ad" (UID: "a2adc02b-dfa9-4679-9c32-e127baabf2ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.704771 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-ceph" (OuterVolumeSpecName: "ceph") pod "a2adc02b-dfa9-4679-9c32-e127baabf2ad" (UID: "a2adc02b-dfa9-4679-9c32-e127baabf2ad"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.757849 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2adc02b-dfa9-4679-9c32-e127baabf2ad" (UID: "a2adc02b-dfa9-4679-9c32-e127baabf2ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.783352 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5ftr\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-kube-api-access-j5ftr\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.783399 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.783413 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.783428 4817 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a2adc02b-dfa9-4679-9c32-e127baabf2ad-ceph\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.783440 4817 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.860049 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data" (OuterVolumeSpecName: "config-data") pod "a2adc02b-dfa9-4679-9c32-e127baabf2ad" (UID: "a2adc02b-dfa9-4679-9c32-e127baabf2ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 06:25:26 crc kubenswrapper[4817]: I0314 06:25:26.885823 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2adc02b-dfa9-4679-9c32-e127baabf2ad-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.582517 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a2adc02b-dfa9-4679-9c32-e127baabf2ad","Type":"ContainerDied","Data":"8f063a7725dc856062469ad225029ce0f5f1d860f22c47eef00d38311e37ab6c"}
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.582588 4817 scope.go:117] "RemoveContainer" containerID="67b7a10878b9f7701be0ae9c86cc0e58dcd7007f748927a677dcec406b8da9c9"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.582616 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.631876 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.634645 4817 scope.go:117] "RemoveContainer" containerID="377faf4ec05d73d821d0852c00eca9c7d52c9fdfdf673eb41091095d63dd070b"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.644750 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"]
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.681130 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Mar 14 06:25:27 crc kubenswrapper[4817]: E0314 06:25:27.681848 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerName="manila-share"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.681875 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerName="manila-share"
Mar 14 06:25:27 crc kubenswrapper[4817]: E0314 06:25:27.682013 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerName="probe"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.682058 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerName="probe"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.682400 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerName="probe"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.682444 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" containerName="manila-share"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.684600 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.688197 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.697885 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.705050 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lxdc\" (UniqueName: \"kubernetes.io/projected/59850886-78b9-425e-895a-4a0f48438dbd-kube-api-access-4lxdc\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.705319 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59850886-78b9-425e-895a-4a0f48438dbd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.705364 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-scripts\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.705402 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.705513 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/59850886-78b9-425e-895a-4a0f48438dbd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.705574 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-config-data\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.705683 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/59850886-78b9-425e-895a-4a0f48438dbd-ceph\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.705726 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.806764 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.806834 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lxdc\" (UniqueName: \"kubernetes.io/projected/59850886-78b9-425e-895a-4a0f48438dbd-kube-api-access-4lxdc\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.807076 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59850886-78b9-425e-895a-4a0f48438dbd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.807103 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-scripts\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.807127 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.807160 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/59850886-78b9-425e-895a-4a0f48438dbd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.807208 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-config-data\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.807282 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/59850886-78b9-425e-895a-4a0f48438dbd-ceph\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.808041 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59850886-78b9-425e-895a-4a0f48438dbd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.808738 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/59850886-78b9-425e-895a-4a0f48438dbd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.812121 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/59850886-78b9-425e-895a-4a0f48438dbd-ceph\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.812782 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.813234 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.820233 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-scripts\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.822967 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59850886-78b9-425e-895a-4a0f48438dbd-config-data\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:27 crc kubenswrapper[4817]: I0314 06:25:27.843023 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lxdc\" (UniqueName: \"kubernetes.io/projected/59850886-78b9-425e-895a-4a0f48438dbd-kube-api-access-4lxdc\") pod \"manila-share-share1-0\" (UID: \"59850886-78b9-425e-895a-4a0f48438dbd\") " pod="openstack/manila-share-share1-0"
Mar 14 06:25:28 crc kubenswrapper[4817]: I0314 06:25:28.014626 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Mar 14 06:25:28 crc kubenswrapper[4817]: I0314 06:25:28.605650 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Mar 14 06:25:28 crc kubenswrapper[4817]: W0314 06:25:28.612403 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59850886_78b9_425e_895a_4a0f48438dbd.slice/crio-2fe3ac2dbf14b1bac6c069012f36d05590f0b0d719504610958d81bbdc042c40 WatchSource:0}: Error finding container 2fe3ac2dbf14b1bac6c069012f36d05590f0b0d719504610958d81bbdc042c40: Status 404 returned error can't find the container with id 2fe3ac2dbf14b1bac6c069012f36d05590f0b0d719504610958d81bbdc042c40
Mar 14 06:25:28 crc kubenswrapper[4817]: I0314 06:25:28.752027 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2adc02b-dfa9-4679-9c32-e127baabf2ad" path="/var/lib/kubelet/pods/a2adc02b-dfa9-4679-9c32-e127baabf2ad/volumes"
Mar 14 06:25:28 crc kubenswrapper[4817]: I0314 06:25:28.930075 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Mar 14 06:25:29 crc kubenswrapper[4817]: I0314 06:25:29.623001 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"59850886-78b9-425e-895a-4a0f48438dbd","Type":"ContainerStarted","Data":"271d845d257bc396233f507b0c5882fd9bc3596b10d9679baee099c2c8579d7e"}
Mar 14 06:25:29 crc kubenswrapper[4817]: I0314 06:25:29.623460 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"59850886-78b9-425e-895a-4a0f48438dbd","Type":"ContainerStarted","Data":"2fe3ac2dbf14b1bac6c069012f36d05590f0b0d719504610958d81bbdc042c40"}
Mar 14 06:25:30 crc kubenswrapper[4817]: I0314 06:25:30.636645 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"59850886-78b9-425e-895a-4a0f48438dbd","Type":"ContainerStarted","Data":"1b58338727dd6c6a024dac1bbd49417a74f31c46fa7681ebd8bd50996d6d86f1"}
Mar 14 06:25:31 crc kubenswrapper[4817]: I0314 06:25:31.403309 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-559cff965b-fnvkc" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.7:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.7:8443: connect: connection refused"
Mar 14 06:25:31 crc kubenswrapper[4817]: I0314 06:25:31.403818 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-559cff965b-fnvkc"
Mar 14 06:25:31 crc kubenswrapper[4817]: I0314 06:25:31.441268 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.441239443 podStartE2EDuration="4.441239443s" podCreationTimestamp="2026-03-14 06:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:25:29.686413994 +0000 UTC m=+3183.724674740" watchObservedRunningTime="2026-03-14 06:25:31.441239443 +0000 UTC m=+3185.479500179"
Mar 14 06:25:35 crc kubenswrapper[4817]: I0314 06:25:35.282290 4817 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="35491618-81a4-4f75-927f-6b6a3d0c9ce2" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.190:3000/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.723312 4817 generic.go:334] "Generic (PLEG): container finished" podID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerID="7ddfc58cb20166fd23d72f3b715cced0d3c58321a3e5caaa45d33f6e8aacc7db" exitCode=137
Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.723992 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559cff965b-fnvkc" event={"ID":"56147eb2-93e8-41a7-b6c9-4f40e07263c8","Type":"ContainerDied","Data":"7ddfc58cb20166fd23d72f3b715cced0d3c58321a3e5caaa45d33f6e8aacc7db"}
Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.880586 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559cff965b-fnvkc"
Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.956195 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-combined-ca-bundle\") pod \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") "
Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.956877 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-secret-key\") pod \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") "
Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.956929 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-scripts\") pod \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") "
Mar 14 06:25:37 crc kubenswrapper[4817]: I0314
06:25:37.957059 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-tls-certs\") pod \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.957139 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56147eb2-93e8-41a7-b6c9-4f40e07263c8-logs\") pod \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.957174 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfmpp\" (UniqueName: \"kubernetes.io/projected/56147eb2-93e8-41a7-b6c9-4f40e07263c8-kube-api-access-rfmpp\") pod \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.957312 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-config-data\") pod \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\" (UID: \"56147eb2-93e8-41a7-b6c9-4f40e07263c8\") " Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.957770 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56147eb2-93e8-41a7-b6c9-4f40e07263c8-logs" (OuterVolumeSpecName: "logs") pod "56147eb2-93e8-41a7-b6c9-4f40e07263c8" (UID: "56147eb2-93e8-41a7-b6c9-4f40e07263c8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.958381 4817 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56147eb2-93e8-41a7-b6c9-4f40e07263c8-logs\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.963395 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "56147eb2-93e8-41a7-b6c9-4f40e07263c8" (UID: "56147eb2-93e8-41a7-b6c9-4f40e07263c8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.968147 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56147eb2-93e8-41a7-b6c9-4f40e07263c8-kube-api-access-rfmpp" (OuterVolumeSpecName: "kube-api-access-rfmpp") pod "56147eb2-93e8-41a7-b6c9-4f40e07263c8" (UID: "56147eb2-93e8-41a7-b6c9-4f40e07263c8"). InnerVolumeSpecName "kube-api-access-rfmpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.981837 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-scripts" (OuterVolumeSpecName: "scripts") pod "56147eb2-93e8-41a7-b6c9-4f40e07263c8" (UID: "56147eb2-93e8-41a7-b6c9-4f40e07263c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.987311 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-config-data" (OuterVolumeSpecName: "config-data") pod "56147eb2-93e8-41a7-b6c9-4f40e07263c8" (UID: "56147eb2-93e8-41a7-b6c9-4f40e07263c8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:25:37 crc kubenswrapper[4817]: I0314 06:25:37.993401 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56147eb2-93e8-41a7-b6c9-4f40e07263c8" (UID: "56147eb2-93e8-41a7-b6c9-4f40e07263c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.015335 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.017982 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "56147eb2-93e8-41a7-b6c9-4f40e07263c8" (UID: "56147eb2-93e8-41a7-b6c9-4f40e07263c8"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.060559 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.060603 4817 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.060612 4817 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.060621 4817 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/56147eb2-93e8-41a7-b6c9-4f40e07263c8-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.060629 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfmpp\" (UniqueName: \"kubernetes.io/projected/56147eb2-93e8-41a7-b6c9-4f40e07263c8-kube-api-access-rfmpp\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.060639 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56147eb2-93e8-41a7-b6c9-4f40e07263c8-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.739958 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-559cff965b-fnvkc" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.758605 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559cff965b-fnvkc" event={"ID":"56147eb2-93e8-41a7-b6c9-4f40e07263c8","Type":"ContainerDied","Data":"5e574b93915140f4ee944e6dd0d0f427a2f8bf7c7a2593ce4c7707602be0d224"} Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.758688 4817 scope.go:117] "RemoveContainer" containerID="35b3f963a770a4c10325006a3e9273407c56234a0cf6f85f6388b9bf48b6471e" Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.788171 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-559cff965b-fnvkc"] Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.798104 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-559cff965b-fnvkc"] Mar 14 06:25:38 crc kubenswrapper[4817]: I0314 06:25:38.979463 4817 scope.go:117] "RemoveContainer" containerID="7ddfc58cb20166fd23d72f3b715cced0d3c58321a3e5caaa45d33f6e8aacc7db" Mar 14 06:25:39 crc kubenswrapper[4817]: I0314 06:25:39.732826 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:25:39 crc kubenswrapper[4817]: E0314 06:25:39.733182 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:25:40 crc kubenswrapper[4817]: I0314 06:25:40.744032 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" path="/var/lib/kubelet/pods/56147eb2-93e8-41a7-b6c9-4f40e07263c8/volumes" Mar 14 06:25:41 crc 
kubenswrapper[4817]: I0314 06:25:41.444116 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Mar 14 06:25:45 crc kubenswrapper[4817]: I0314 06:25:45.899439 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 06:25:49 crc kubenswrapper[4817]: I0314 06:25:49.667341 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Mar 14 06:25:54 crc kubenswrapper[4817]: I0314 06:25:54.732563 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:25:54 crc kubenswrapper[4817]: E0314 06:25:54.733665 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.208729 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557826-m659g"] Mar 14 06:26:00 crc kubenswrapper[4817]: E0314 06:26:00.209987 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon-log" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.210008 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon-log" Mar 14 06:26:00 crc kubenswrapper[4817]: E0314 06:26:00.210041 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.210049 4817 
state_mem.go:107] "Deleted CPUSet assignment" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.210319 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.210347 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="56147eb2-93e8-41a7-b6c9-4f40e07263c8" containerName="horizon-log" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.211365 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557826-m659g" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.213666 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.217758 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.218067 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.223856 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557826-m659g"] Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.306952 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6hk\" (UniqueName: \"kubernetes.io/projected/fd80be70-77d4-4e83-9348-ef11ecbc9a52-kube-api-access-9w6hk\") pod \"auto-csr-approver-29557826-m659g\" (UID: \"fd80be70-77d4-4e83-9348-ef11ecbc9a52\") " pod="openshift-infra/auto-csr-approver-29557826-m659g" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.409285 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9w6hk\" (UniqueName: \"kubernetes.io/projected/fd80be70-77d4-4e83-9348-ef11ecbc9a52-kube-api-access-9w6hk\") pod \"auto-csr-approver-29557826-m659g\" (UID: \"fd80be70-77d4-4e83-9348-ef11ecbc9a52\") " pod="openshift-infra/auto-csr-approver-29557826-m659g" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.447028 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w6hk\" (UniqueName: \"kubernetes.io/projected/fd80be70-77d4-4e83-9348-ef11ecbc9a52-kube-api-access-9w6hk\") pod \"auto-csr-approver-29557826-m659g\" (UID: \"fd80be70-77d4-4e83-9348-ef11ecbc9a52\") " pod="openshift-infra/auto-csr-approver-29557826-m659g" Mar 14 06:26:00 crc kubenswrapper[4817]: I0314 06:26:00.547712 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557826-m659g" Mar 14 06:26:01 crc kubenswrapper[4817]: I0314 06:26:01.044392 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557826-m659g"] Mar 14 06:26:01 crc kubenswrapper[4817]: I0314 06:26:01.971743 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557826-m659g" event={"ID":"fd80be70-77d4-4e83-9348-ef11ecbc9a52","Type":"ContainerStarted","Data":"f5210058ea650a16d93660838067062ac6e0c614f99fb32f3b1dd2d12dbee4d6"} Mar 14 06:26:02 crc kubenswrapper[4817]: I0314 06:26:02.984110 4817 generic.go:334] "Generic (PLEG): container finished" podID="fd80be70-77d4-4e83-9348-ef11ecbc9a52" containerID="7cc72739a245b3ab369c36b2b4d4d39b8d1ee52e335cb6a0f86641dcbb664121" exitCode=0 Mar 14 06:26:02 crc kubenswrapper[4817]: I0314 06:26:02.984202 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557826-m659g" event={"ID":"fd80be70-77d4-4e83-9348-ef11ecbc9a52","Type":"ContainerDied","Data":"7cc72739a245b3ab369c36b2b4d4d39b8d1ee52e335cb6a0f86641dcbb664121"} Mar 14 06:26:04 crc kubenswrapper[4817]: I0314 
06:26:04.424785 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557826-m659g" Mar 14 06:26:04 crc kubenswrapper[4817]: I0314 06:26:04.508714 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w6hk\" (UniqueName: \"kubernetes.io/projected/fd80be70-77d4-4e83-9348-ef11ecbc9a52-kube-api-access-9w6hk\") pod \"fd80be70-77d4-4e83-9348-ef11ecbc9a52\" (UID: \"fd80be70-77d4-4e83-9348-ef11ecbc9a52\") " Mar 14 06:26:04 crc kubenswrapper[4817]: I0314 06:26:04.514583 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd80be70-77d4-4e83-9348-ef11ecbc9a52-kube-api-access-9w6hk" (OuterVolumeSpecName: "kube-api-access-9w6hk") pod "fd80be70-77d4-4e83-9348-ef11ecbc9a52" (UID: "fd80be70-77d4-4e83-9348-ef11ecbc9a52"). InnerVolumeSpecName "kube-api-access-9w6hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:26:04 crc kubenswrapper[4817]: I0314 06:26:04.613447 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w6hk\" (UniqueName: \"kubernetes.io/projected/fd80be70-77d4-4e83-9348-ef11ecbc9a52-kube-api-access-9w6hk\") on node \"crc\" DevicePath \"\"" Mar 14 06:26:05 crc kubenswrapper[4817]: I0314 06:26:05.009143 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557826-m659g" event={"ID":"fd80be70-77d4-4e83-9348-ef11ecbc9a52","Type":"ContainerDied","Data":"f5210058ea650a16d93660838067062ac6e0c614f99fb32f3b1dd2d12dbee4d6"} Mar 14 06:26:05 crc kubenswrapper[4817]: I0314 06:26:05.009554 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5210058ea650a16d93660838067062ac6e0c614f99fb32f3b1dd2d12dbee4d6" Mar 14 06:26:05 crc kubenswrapper[4817]: I0314 06:26:05.009622 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557826-m659g" Mar 14 06:26:05 crc kubenswrapper[4817]: I0314 06:26:05.511707 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557820-v496p"] Mar 14 06:26:05 crc kubenswrapper[4817]: I0314 06:26:05.523185 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557820-v496p"] Mar 14 06:26:05 crc kubenswrapper[4817]: I0314 06:26:05.731941 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:26:05 crc kubenswrapper[4817]: E0314 06:26:05.732324 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:26:06 crc kubenswrapper[4817]: I0314 06:26:06.747874 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc6ec74-28c9-4db4-bb37-39938658d717" path="/var/lib/kubelet/pods/adc6ec74-28c9-4db4-bb37-39938658d717/volumes" Mar 14 06:26:20 crc kubenswrapper[4817]: I0314 06:26:20.733229 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:26:20 crc kubenswrapper[4817]: E0314 06:26:20.734533 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:26:35 crc kubenswrapper[4817]: I0314 06:26:35.732629 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:26:35 crc kubenswrapper[4817]: E0314 06:26:35.733597 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.246462 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 14 06:26:38 crc kubenswrapper[4817]: E0314 06:26:38.247237 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd80be70-77d4-4e83-9348-ef11ecbc9a52" containerName="oc" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.247253 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd80be70-77d4-4e83-9348-ef11ecbc9a52" containerName="oc" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.247464 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd80be70-77d4-4e83-9348-ef11ecbc9a52" containerName="oc" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.248490 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.252098 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dvggb" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.253303 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.253620 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.256438 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.274862 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.395656 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.395737 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.395871 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.395940 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.396008 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6c2s\" (UniqueName: \"kubernetes.io/projected/106079f9-3258-4c46-8ef4-1811c407fc69-kube-api-access-p6c2s\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.396082 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.396298 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-config-data\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.396436 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.396508 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498304 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-config-data\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498383 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498411 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498456 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: 
\"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498484 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498545 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498579 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498631 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6c2s\" (UniqueName: \"kubernetes.io/projected/106079f9-3258-4c46-8ef4-1811c407fc69-kube-api-access-p6c2s\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.498668 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc 
kubenswrapper[4817]: I0314 06:26:38.499205 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.500574 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.500782 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.501363 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.502942 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-config-data\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.512767 4817 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.513105 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.513123 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.517553 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6c2s\" (UniqueName: \"kubernetes.io/projected/106079f9-3258-4c46-8ef4-1811c407fc69-kube-api-access-p6c2s\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.554098 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " pod="openstack/tempest-tests-tempest" Mar 14 06:26:38 crc kubenswrapper[4817]: I0314 06:26:38.590800 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 14 06:26:39 crc kubenswrapper[4817]: I0314 06:26:39.094368 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 14 06:26:39 crc kubenswrapper[4817]: I0314 06:26:39.406303 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"106079f9-3258-4c46-8ef4-1811c407fc69","Type":"ContainerStarted","Data":"5d181735afd4cdcde3c880668d7d68ded31bdccc2eeec44eff4953db5040a8f5"} Mar 14 06:26:46 crc kubenswrapper[4817]: I0314 06:26:46.020415 4817 scope.go:117] "RemoveContainer" containerID="353dc8d0033012041145f11991d43acc004ebc42960283d8b01725ba2b903825" Mar 14 06:26:49 crc kubenswrapper[4817]: I0314 06:26:49.733624 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:26:49 crc kubenswrapper[4817]: E0314 06:26:49.734770 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:27:00 crc kubenswrapper[4817]: I0314 06:27:00.733039 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:27:00 crc kubenswrapper[4817]: E0314 06:27:00.734201 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:27:11 crc kubenswrapper[4817]: E0314 06:27:11.423661 4817 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 14 06:27:11 crc kubenswrapper[4817]: E0314 06:27:11.424812 4817 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadO
nly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6c2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(106079f9-3258-4c46-8ef4-1811c407fc69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 06:27:11 crc kubenswrapper[4817]: E0314 06:27:11.427384 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/tempest-tests-tempest" podUID="106079f9-3258-4c46-8ef4-1811c407fc69" Mar 14 06:27:11 crc kubenswrapper[4817]: E0314 06:27:11.739147 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="106079f9-3258-4c46-8ef4-1811c407fc69" Mar 14 06:27:15 crc kubenswrapper[4817]: I0314 06:27:15.732116 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:27:15 crc kubenswrapper[4817]: E0314 06:27:15.732960 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:27:27 crc kubenswrapper[4817]: I0314 06:27:27.934577 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"106079f9-3258-4c46-8ef4-1811c407fc69","Type":"ContainerStarted","Data":"3201285d70c77eb8608027fc6155f585afa22132c28d6742216f8c0cc5526ed6"} Mar 14 06:27:27 crc kubenswrapper[4817]: I0314 06:27:27.955940 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.81340937 podStartE2EDuration="50.955887023s" podCreationTimestamp="2026-03-14 06:26:37 +0000 UTC" firstStartedPulling="2026-03-14 06:26:39.102675422 +0000 UTC m=+3253.140936168" lastFinishedPulling="2026-03-14 06:27:26.245153075 +0000 UTC m=+3300.283413821" observedRunningTime="2026-03-14 06:27:27.951639613 +0000 UTC 
m=+3301.989900389" watchObservedRunningTime="2026-03-14 06:27:27.955887023 +0000 UTC m=+3301.994147769" Mar 14 06:27:29 crc kubenswrapper[4817]: I0314 06:27:29.732201 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:27:29 crc kubenswrapper[4817]: E0314 06:27:29.732775 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:27:44 crc kubenswrapper[4817]: I0314 06:27:44.732575 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:27:44 crc kubenswrapper[4817]: E0314 06:27:44.733403 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:27:56 crc kubenswrapper[4817]: I0314 06:27:56.758479 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:27:56 crc kubenswrapper[4817]: E0314 06:27:56.759950 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.164511 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557828-kp9sx"] Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.166679 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.169429 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.170214 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.170918 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.180318 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557828-kp9sx"] Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.197536 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkzgg\" (UniqueName: \"kubernetes.io/projected/8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4-kube-api-access-pkzgg\") pod \"auto-csr-approver-29557828-kp9sx\" (UID: \"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4\") " pod="openshift-infra/auto-csr-approver-29557828-kp9sx" Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.299612 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkzgg\" (UniqueName: \"kubernetes.io/projected/8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4-kube-api-access-pkzgg\") 
pod \"auto-csr-approver-29557828-kp9sx\" (UID: \"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4\") " pod="openshift-infra/auto-csr-approver-29557828-kp9sx" Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.325440 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkzgg\" (UniqueName: \"kubernetes.io/projected/8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4-kube-api-access-pkzgg\") pod \"auto-csr-approver-29557828-kp9sx\" (UID: \"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4\") " pod="openshift-infra/auto-csr-approver-29557828-kp9sx" Mar 14 06:28:00 crc kubenswrapper[4817]: I0314 06:28:00.511592 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" Mar 14 06:28:01 crc kubenswrapper[4817]: I0314 06:28:01.077511 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557828-kp9sx"] Mar 14 06:28:01 crc kubenswrapper[4817]: I0314 06:28:01.327580 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" event={"ID":"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4","Type":"ContainerStarted","Data":"2ea279ca6a3440126dc0d770a75050de2fc6fece8770be8272040e1f749c23a5"} Mar 14 06:28:02 crc kubenswrapper[4817]: I0314 06:28:02.340563 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" event={"ID":"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4","Type":"ContainerStarted","Data":"d77df3d0558931ca3fd400c6782cd0b66dc01d92b88c6647e8771ccd3b427e10"} Mar 14 06:28:02 crc kubenswrapper[4817]: I0314 06:28:02.355827 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" podStartSLOduration=1.506347081 podStartE2EDuration="2.355802616s" podCreationTimestamp="2026-03-14 06:28:00 +0000 UTC" firstStartedPulling="2026-03-14 06:28:01.080606535 +0000 UTC m=+3335.118867281" lastFinishedPulling="2026-03-14 
06:28:01.93006207 +0000 UTC m=+3335.968322816" observedRunningTime="2026-03-14 06:28:02.353382297 +0000 UTC m=+3336.391643033" watchObservedRunningTime="2026-03-14 06:28:02.355802616 +0000 UTC m=+3336.394063362" Mar 14 06:28:03 crc kubenswrapper[4817]: I0314 06:28:03.353265 4817 generic.go:334] "Generic (PLEG): container finished" podID="8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4" containerID="d77df3d0558931ca3fd400c6782cd0b66dc01d92b88c6647e8771ccd3b427e10" exitCode=0 Mar 14 06:28:03 crc kubenswrapper[4817]: I0314 06:28:03.353370 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" event={"ID":"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4","Type":"ContainerDied","Data":"d77df3d0558931ca3fd400c6782cd0b66dc01d92b88c6647e8771ccd3b427e10"} Mar 14 06:28:04 crc kubenswrapper[4817]: I0314 06:28:04.768926 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" Mar 14 06:28:04 crc kubenswrapper[4817]: I0314 06:28:04.901705 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkzgg\" (UniqueName: \"kubernetes.io/projected/8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4-kube-api-access-pkzgg\") pod \"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4\" (UID: \"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4\") " Mar 14 06:28:04 crc kubenswrapper[4817]: I0314 06:28:04.911805 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4-kube-api-access-pkzgg" (OuterVolumeSpecName: "kube-api-access-pkzgg") pod "8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4" (UID: "8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4"). InnerVolumeSpecName "kube-api-access-pkzgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:28:05 crc kubenswrapper[4817]: I0314 06:28:05.006595 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkzgg\" (UniqueName: \"kubernetes.io/projected/8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4-kube-api-access-pkzgg\") on node \"crc\" DevicePath \"\"" Mar 14 06:28:05 crc kubenswrapper[4817]: I0314 06:28:05.393050 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" event={"ID":"8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4","Type":"ContainerDied","Data":"2ea279ca6a3440126dc0d770a75050de2fc6fece8770be8272040e1f749c23a5"} Mar 14 06:28:05 crc kubenswrapper[4817]: I0314 06:28:05.393439 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ea279ca6a3440126dc0d770a75050de2fc6fece8770be8272040e1f749c23a5" Mar 14 06:28:05 crc kubenswrapper[4817]: I0314 06:28:05.393137 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557828-kp9sx" Mar 14 06:28:05 crc kubenswrapper[4817]: I0314 06:28:05.470453 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557822-lhpn7"] Mar 14 06:28:05 crc kubenswrapper[4817]: I0314 06:28:05.479866 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557822-lhpn7"] Mar 14 06:28:06 crc kubenswrapper[4817]: I0314 06:28:06.747077 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578c4c2d-5eb1-4d88-947d-155d05b24cec" path="/var/lib/kubelet/pods/578c4c2d-5eb1-4d88-947d-155d05b24cec/volumes" Mar 14 06:28:07 crc kubenswrapper[4817]: I0314 06:28:07.732849 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:28:07 crc kubenswrapper[4817]: E0314 06:28:07.734468 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:28:19 crc kubenswrapper[4817]: I0314 06:28:19.733444 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:28:19 crc kubenswrapper[4817]: E0314 06:28:19.734543 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:28:32 crc kubenswrapper[4817]: I0314 06:28:32.737944 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:28:32 crc kubenswrapper[4817]: E0314 06:28:32.739714 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:28:46 crc kubenswrapper[4817]: I0314 06:28:46.174013 4817 scope.go:117] "RemoveContainer" containerID="971a84a12e9c0ab4f5585254f513d5a3b4c8ada50bcb4caefa3157e73e2b6917" Mar 14 06:28:46 crc kubenswrapper[4817]: I0314 06:28:46.742801 4817 scope.go:117] "RemoveContainer" 
containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:28:47 crc kubenswrapper[4817]: I0314 06:28:47.957374 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"d569fb6c3ed877b8474879dfe81b9cdc9ebaabb0fcc1493502863fb34518ab89"} Mar 14 06:29:22 crc kubenswrapper[4817]: I0314 06:29:22.950636 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d5mql"] Mar 14 06:29:22 crc kubenswrapper[4817]: E0314 06:29:22.955794 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4" containerName="oc" Mar 14 06:29:22 crc kubenswrapper[4817]: I0314 06:29:22.955815 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4" containerName="oc" Mar 14 06:29:22 crc kubenswrapper[4817]: I0314 06:29:22.956063 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4" containerName="oc" Mar 14 06:29:22 crc kubenswrapper[4817]: I0314 06:29:22.957619 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:22 crc kubenswrapper[4817]: I0314 06:29:22.967703 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5mql"] Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.030179 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-utilities\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.030301 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-catalog-content\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.030358 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbm8g\" (UniqueName: \"kubernetes.io/projected/0eab8c13-201e-490c-8ef7-d75f341933c3-kube-api-access-lbm8g\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.133691 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-catalog-content\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.133825 4817 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lbm8g\" (UniqueName: \"kubernetes.io/projected/0eab8c13-201e-490c-8ef7-d75f341933c3-kube-api-access-lbm8g\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.133974 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-utilities\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.134380 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-catalog-content\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.134468 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-utilities\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.156319 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbm8g\" (UniqueName: \"kubernetes.io/projected/0eab8c13-201e-490c-8ef7-d75f341933c3-kube-api-access-lbm8g\") pod \"redhat-operators-d5mql\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.287434 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:23 crc kubenswrapper[4817]: I0314 06:29:23.818413 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5mql"] Mar 14 06:29:24 crc kubenswrapper[4817]: I0314 06:29:24.383809 4817 generic.go:334] "Generic (PLEG): container finished" podID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerID="c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91" exitCode=0 Mar 14 06:29:24 crc kubenswrapper[4817]: I0314 06:29:24.384175 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5mql" event={"ID":"0eab8c13-201e-490c-8ef7-d75f341933c3","Type":"ContainerDied","Data":"c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91"} Mar 14 06:29:24 crc kubenswrapper[4817]: I0314 06:29:24.384206 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5mql" event={"ID":"0eab8c13-201e-490c-8ef7-d75f341933c3","Type":"ContainerStarted","Data":"c8dba189919288961853a66dda86dfba9bc789cc7d4f0ea59f599901792941cb"} Mar 14 06:29:25 crc kubenswrapper[4817]: I0314 06:29:25.402638 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5mql" event={"ID":"0eab8c13-201e-490c-8ef7-d75f341933c3","Type":"ContainerStarted","Data":"8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e"} Mar 14 06:29:28 crc kubenswrapper[4817]: I0314 06:29:28.438741 4817 generic.go:334] "Generic (PLEG): container finished" podID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerID="8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e" exitCode=0 Mar 14 06:29:28 crc kubenswrapper[4817]: I0314 06:29:28.438820 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5mql" 
event={"ID":"0eab8c13-201e-490c-8ef7-d75f341933c3","Type":"ContainerDied","Data":"8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e"} Mar 14 06:29:29 crc kubenswrapper[4817]: I0314 06:29:29.454394 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5mql" event={"ID":"0eab8c13-201e-490c-8ef7-d75f341933c3","Type":"ContainerStarted","Data":"f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b"} Mar 14 06:29:29 crc kubenswrapper[4817]: I0314 06:29:29.472758 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d5mql" podStartSLOduration=2.921654807 podStartE2EDuration="7.472737535s" podCreationTimestamp="2026-03-14 06:29:22 +0000 UTC" firstStartedPulling="2026-03-14 06:29:24.386156936 +0000 UTC m=+3418.424417682" lastFinishedPulling="2026-03-14 06:29:28.937239664 +0000 UTC m=+3422.975500410" observedRunningTime="2026-03-14 06:29:29.471994773 +0000 UTC m=+3423.510255529" watchObservedRunningTime="2026-03-14 06:29:29.472737535 +0000 UTC m=+3423.510998281" Mar 14 06:29:33 crc kubenswrapper[4817]: I0314 06:29:33.288416 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:33 crc kubenswrapper[4817]: I0314 06:29:33.289092 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:34 crc kubenswrapper[4817]: I0314 06:29:34.360781 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d5mql" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="registry-server" probeResult="failure" output=< Mar 14 06:29:34 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 06:29:34 crc kubenswrapper[4817]: > Mar 14 06:29:43 crc kubenswrapper[4817]: I0314 06:29:43.347532 4817 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:43 crc kubenswrapper[4817]: I0314 06:29:43.421193 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:44 crc kubenswrapper[4817]: I0314 06:29:44.073027 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d5mql"] Mar 14 06:29:44 crc kubenswrapper[4817]: I0314 06:29:44.631234 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d5mql" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="registry-server" containerID="cri-o://f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b" gracePeriod=2 Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.190573 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.346074 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-utilities\") pod \"0eab8c13-201e-490c-8ef7-d75f341933c3\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.346465 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbm8g\" (UniqueName: \"kubernetes.io/projected/0eab8c13-201e-490c-8ef7-d75f341933c3-kube-api-access-lbm8g\") pod \"0eab8c13-201e-490c-8ef7-d75f341933c3\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.346540 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-catalog-content\") pod 
\"0eab8c13-201e-490c-8ef7-d75f341933c3\" (UID: \"0eab8c13-201e-490c-8ef7-d75f341933c3\") " Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.350826 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-utilities" (OuterVolumeSpecName: "utilities") pod "0eab8c13-201e-490c-8ef7-d75f341933c3" (UID: "0eab8c13-201e-490c-8ef7-d75f341933c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.377229 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eab8c13-201e-490c-8ef7-d75f341933c3-kube-api-access-lbm8g" (OuterVolumeSpecName: "kube-api-access-lbm8g") pod "0eab8c13-201e-490c-8ef7-d75f341933c3" (UID: "0eab8c13-201e-490c-8ef7-d75f341933c3"). InnerVolumeSpecName "kube-api-access-lbm8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.450429 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.450467 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbm8g\" (UniqueName: \"kubernetes.io/projected/0eab8c13-201e-490c-8ef7-d75f341933c3-kube-api-access-lbm8g\") on node \"crc\" DevicePath \"\"" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.493011 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eab8c13-201e-490c-8ef7-d75f341933c3" (UID: "0eab8c13-201e-490c-8ef7-d75f341933c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.551911 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eab8c13-201e-490c-8ef7-d75f341933c3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.643414 4817 generic.go:334] "Generic (PLEG): container finished" podID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerID="f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b" exitCode=0 Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.643466 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5mql" event={"ID":"0eab8c13-201e-490c-8ef7-d75f341933c3","Type":"ContainerDied","Data":"f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b"} Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.643473 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5mql" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.643505 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5mql" event={"ID":"0eab8c13-201e-490c-8ef7-d75f341933c3","Type":"ContainerDied","Data":"c8dba189919288961853a66dda86dfba9bc789cc7d4f0ea59f599901792941cb"} Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.643523 4817 scope.go:117] "RemoveContainer" containerID="f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.684816 4817 scope.go:117] "RemoveContainer" containerID="8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.689508 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d5mql"] Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.700324 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d5mql"] Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.729680 4817 scope.go:117] "RemoveContainer" containerID="c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.770485 4817 scope.go:117] "RemoveContainer" containerID="f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b" Mar 14 06:29:45 crc kubenswrapper[4817]: E0314 06:29:45.771692 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b\": container with ID starting with f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b not found: ID does not exist" containerID="f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.771751 4817 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b"} err="failed to get container status \"f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b\": rpc error: code = NotFound desc = could not find container \"f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b\": container with ID starting with f9fd3e27bcda4f26f6d6ee6fad40325e01560e32b7d4ff7fd39071aa0616752b not found: ID does not exist" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.771787 4817 scope.go:117] "RemoveContainer" containerID="8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e" Mar 14 06:29:45 crc kubenswrapper[4817]: E0314 06:29:45.772446 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e\": container with ID starting with 8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e not found: ID does not exist" containerID="8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.772512 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e"} err="failed to get container status \"8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e\": rpc error: code = NotFound desc = could not find container \"8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e\": container with ID starting with 8ddf331a71a00b8c291b0f9883885a9b89d5faed805e112dbb5f431a7bcddd9e not found: ID does not exist" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.772563 4817 scope.go:117] "RemoveContainer" containerID="c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91" Mar 14 06:29:45 crc kubenswrapper[4817]: E0314 
06:29:45.773124 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91\": container with ID starting with c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91 not found: ID does not exist" containerID="c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91" Mar 14 06:29:45 crc kubenswrapper[4817]: I0314 06:29:45.773173 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91"} err="failed to get container status \"c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91\": rpc error: code = NotFound desc = could not find container \"c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91\": container with ID starting with c6172110a7c3549eae45eb60c6db976344589273342547d53a34161695161a91 not found: ID does not exist" Mar 14 06:29:46 crc kubenswrapper[4817]: I0314 06:29:46.744979 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" path="/var/lib/kubelet/pods/0eab8c13-201e-490c-8ef7-d75f341933c3/volumes" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.157495 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557830-ndr9n"] Mar 14 06:30:00 crc kubenswrapper[4817]: E0314 06:30:00.158610 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="registry-server" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.158627 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="registry-server" Mar 14 06:30:00 crc kubenswrapper[4817]: E0314 06:30:00.158656 4817 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="extract-content" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.158662 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="extract-content" Mar 14 06:30:00 crc kubenswrapper[4817]: E0314 06:30:00.158686 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="extract-utilities" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.158693 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="extract-utilities" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.158909 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eab8c13-201e-490c-8ef7-d75f341933c3" containerName="registry-server" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.159633 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557830-ndr9n" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.163582 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.169805 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz"] Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.170736 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.170840 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.171700 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.178712 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.179095 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.236223 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557830-ndr9n"] Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.264749 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz"] Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.303237 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d52712-bfe2-4385-9f97-3d80f0ad0d44-config-volume\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.303668 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zbxt\" (UniqueName: \"kubernetes.io/projected/34948651-9926-42ca-bdd6-d227b7e7797c-kube-api-access-2zbxt\") pod \"auto-csr-approver-29557830-ndr9n\" (UID: \"34948651-9926-42ca-bdd6-d227b7e7797c\") " pod="openshift-infra/auto-csr-approver-29557830-ndr9n" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.303794 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/94d52712-bfe2-4385-9f97-3d80f0ad0d44-secret-volume\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.304082 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97rsb\" (UniqueName: \"kubernetes.io/projected/94d52712-bfe2-4385-9f97-3d80f0ad0d44-kube-api-access-97rsb\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.406231 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d52712-bfe2-4385-9f97-3d80f0ad0d44-config-volume\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.406310 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zbxt\" (UniqueName: \"kubernetes.io/projected/34948651-9926-42ca-bdd6-d227b7e7797c-kube-api-access-2zbxt\") pod \"auto-csr-approver-29557830-ndr9n\" (UID: \"34948651-9926-42ca-bdd6-d227b7e7797c\") " pod="openshift-infra/auto-csr-approver-29557830-ndr9n" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.406374 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94d52712-bfe2-4385-9f97-3d80f0ad0d44-secret-volume\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 
06:30:00.406492 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97rsb\" (UniqueName: \"kubernetes.io/projected/94d52712-bfe2-4385-9f97-3d80f0ad0d44-kube-api-access-97rsb\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.407665 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d52712-bfe2-4385-9f97-3d80f0ad0d44-config-volume\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.414774 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94d52712-bfe2-4385-9f97-3d80f0ad0d44-secret-volume\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.423772 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zbxt\" (UniqueName: \"kubernetes.io/projected/34948651-9926-42ca-bdd6-d227b7e7797c-kube-api-access-2zbxt\") pod \"auto-csr-approver-29557830-ndr9n\" (UID: \"34948651-9926-42ca-bdd6-d227b7e7797c\") " pod="openshift-infra/auto-csr-approver-29557830-ndr9n" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.426239 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97rsb\" (UniqueName: \"kubernetes.io/projected/94d52712-bfe2-4385-9f97-3d80f0ad0d44-kube-api-access-97rsb\") pod \"collect-profiles-29557830-6bsvz\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.537440 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557830-ndr9n" Mar 14 06:30:00 crc kubenswrapper[4817]: I0314 06:30:00.549616 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:01 crc kubenswrapper[4817]: I0314 06:30:01.070373 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557830-ndr9n"] Mar 14 06:30:01 crc kubenswrapper[4817]: I0314 06:30:01.185306 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz"] Mar 14 06:30:01 crc kubenswrapper[4817]: W0314 06:30:01.186822 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94d52712_bfe2_4385_9f97_3d80f0ad0d44.slice/crio-1f6b22e1821ab36d1fb88c4811cefd5c85c205bf9d2c26a811bf487752a3889f WatchSource:0}: Error finding container 1f6b22e1821ab36d1fb88c4811cefd5c85c205bf9d2c26a811bf487752a3889f: Status 404 returned error can't find the container with id 1f6b22e1821ab36d1fb88c4811cefd5c85c205bf9d2c26a811bf487752a3889f Mar 14 06:30:01 crc kubenswrapper[4817]: I0314 06:30:01.799528 4817 generic.go:334] "Generic (PLEG): container finished" podID="94d52712-bfe2-4385-9f97-3d80f0ad0d44" containerID="ec7dd963eb179d0e44e38c8107b4ef36afcb02a73a8d73eceba8a542e27e1172" exitCode=0 Mar 14 06:30:01 crc kubenswrapper[4817]: I0314 06:30:01.799611 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" event={"ID":"94d52712-bfe2-4385-9f97-3d80f0ad0d44","Type":"ContainerDied","Data":"ec7dd963eb179d0e44e38c8107b4ef36afcb02a73a8d73eceba8a542e27e1172"} Mar 14 
06:30:01 crc kubenswrapper[4817]: I0314 06:30:01.800049 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" event={"ID":"94d52712-bfe2-4385-9f97-3d80f0ad0d44","Type":"ContainerStarted","Data":"1f6b22e1821ab36d1fb88c4811cefd5c85c205bf9d2c26a811bf487752a3889f"} Mar 14 06:30:01 crc kubenswrapper[4817]: I0314 06:30:01.801688 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557830-ndr9n" event={"ID":"34948651-9926-42ca-bdd6-d227b7e7797c","Type":"ContainerStarted","Data":"8289feee0d1b6d019401fc07a4231dd9c2ccb5ae0f0fcb28aff3d86033ce4888"} Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.312225 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.373807 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97rsb\" (UniqueName: \"kubernetes.io/projected/94d52712-bfe2-4385-9f97-3d80f0ad0d44-kube-api-access-97rsb\") pod \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.374774 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94d52712-bfe2-4385-9f97-3d80f0ad0d44-secret-volume\") pod \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.374804 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d52712-bfe2-4385-9f97-3d80f0ad0d44-config-volume\") pod \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\" (UID: \"94d52712-bfe2-4385-9f97-3d80f0ad0d44\") " Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 
06:30:03.375429 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d52712-bfe2-4385-9f97-3d80f0ad0d44-config-volume" (OuterVolumeSpecName: "config-volume") pod "94d52712-bfe2-4385-9f97-3d80f0ad0d44" (UID: "94d52712-bfe2-4385-9f97-3d80f0ad0d44"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.381292 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d52712-bfe2-4385-9f97-3d80f0ad0d44-kube-api-access-97rsb" (OuterVolumeSpecName: "kube-api-access-97rsb") pod "94d52712-bfe2-4385-9f97-3d80f0ad0d44" (UID: "94d52712-bfe2-4385-9f97-3d80f0ad0d44"). InnerVolumeSpecName "kube-api-access-97rsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.381756 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d52712-bfe2-4385-9f97-3d80f0ad0d44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "94d52712-bfe2-4385-9f97-3d80f0ad0d44" (UID: "94d52712-bfe2-4385-9f97-3d80f0ad0d44"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.477304 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97rsb\" (UniqueName: \"kubernetes.io/projected/94d52712-bfe2-4385-9f97-3d80f0ad0d44-kube-api-access-97rsb\") on node \"crc\" DevicePath \"\"" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.477350 4817 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94d52712-bfe2-4385-9f97-3d80f0ad0d44-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.477371 4817 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d52712-bfe2-4385-9f97-3d80f0ad0d44-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.822339 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.822345 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557830-6bsvz" event={"ID":"94d52712-bfe2-4385-9f97-3d80f0ad0d44","Type":"ContainerDied","Data":"1f6b22e1821ab36d1fb88c4811cefd5c85c205bf9d2c26a811bf487752a3889f"} Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.822470 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f6b22e1821ab36d1fb88c4811cefd5c85c205bf9d2c26a811bf487752a3889f" Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.824728 4817 generic.go:334] "Generic (PLEG): container finished" podID="34948651-9926-42ca-bdd6-d227b7e7797c" containerID="9e4d79cbd7e3b0a0a192eac67e4b8318817497a4cec6ed94c802f1f8804ba533" exitCode=0 Mar 14 06:30:03 crc kubenswrapper[4817]: I0314 06:30:03.824833 4817 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557830-ndr9n" event={"ID":"34948651-9926-42ca-bdd6-d227b7e7797c","Type":"ContainerDied","Data":"9e4d79cbd7e3b0a0a192eac67e4b8318817497a4cec6ed94c802f1f8804ba533"} Mar 14 06:30:04 crc kubenswrapper[4817]: I0314 06:30:04.405276 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m"] Mar 14 06:30:04 crc kubenswrapper[4817]: I0314 06:30:04.414871 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557785-ssp4m"] Mar 14 06:30:04 crc kubenswrapper[4817]: I0314 06:30:04.746703 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f550d539-a9ab-4994-9f99-132d0bfaca8e" path="/var/lib/kubelet/pods/f550d539-a9ab-4994-9f99-132d0bfaca8e/volumes" Mar 14 06:30:05 crc kubenswrapper[4817]: I0314 06:30:05.243468 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557830-ndr9n" Mar 14 06:30:05 crc kubenswrapper[4817]: I0314 06:30:05.325338 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zbxt\" (UniqueName: \"kubernetes.io/projected/34948651-9926-42ca-bdd6-d227b7e7797c-kube-api-access-2zbxt\") pod \"34948651-9926-42ca-bdd6-d227b7e7797c\" (UID: \"34948651-9926-42ca-bdd6-d227b7e7797c\") " Mar 14 06:30:05 crc kubenswrapper[4817]: I0314 06:30:05.335199 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34948651-9926-42ca-bdd6-d227b7e7797c-kube-api-access-2zbxt" (OuterVolumeSpecName: "kube-api-access-2zbxt") pod "34948651-9926-42ca-bdd6-d227b7e7797c" (UID: "34948651-9926-42ca-bdd6-d227b7e7797c"). InnerVolumeSpecName "kube-api-access-2zbxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:30:05 crc kubenswrapper[4817]: I0314 06:30:05.428189 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zbxt\" (UniqueName: \"kubernetes.io/projected/34948651-9926-42ca-bdd6-d227b7e7797c-kube-api-access-2zbxt\") on node \"crc\" DevicePath \"\"" Mar 14 06:30:05 crc kubenswrapper[4817]: I0314 06:30:05.845242 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557830-ndr9n" event={"ID":"34948651-9926-42ca-bdd6-d227b7e7797c","Type":"ContainerDied","Data":"8289feee0d1b6d019401fc07a4231dd9c2ccb5ae0f0fcb28aff3d86033ce4888"} Mar 14 06:30:05 crc kubenswrapper[4817]: I0314 06:30:05.845314 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8289feee0d1b6d019401fc07a4231dd9c2ccb5ae0f0fcb28aff3d86033ce4888" Mar 14 06:30:05 crc kubenswrapper[4817]: I0314 06:30:05.845401 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557830-ndr9n" Mar 14 06:30:06 crc kubenswrapper[4817]: I0314 06:30:06.307456 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557824-rsgvk"] Mar 14 06:30:06 crc kubenswrapper[4817]: I0314 06:30:06.319041 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557824-rsgvk"] Mar 14 06:30:06 crc kubenswrapper[4817]: I0314 06:30:06.751280 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4879cec4-07a4-4b7e-a9be-1e4ac33977ed" path="/var/lib/kubelet/pods/4879cec4-07a4-4b7e-a9be-1e4ac33977ed/volumes" Mar 14 06:30:46 crc kubenswrapper[4817]: I0314 06:30:46.329143 4817 scope.go:117] "RemoveContainer" containerID="d25483dbf9a40a992108387e23619c0ab03f48446dacc9fd4760cd3bae13a867" Mar 14 06:30:46 crc kubenswrapper[4817]: I0314 06:30:46.384328 4817 scope.go:117] "RemoveContainer" 
containerID="b3df43fbc1536fc262e8ff496113bdc09bf597892243ab9147a033c8bfe1b1b1" Mar 14 06:30:52 crc kubenswrapper[4817]: I0314 06:30:52.984463 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-krqnz"] Mar 14 06:30:52 crc kubenswrapper[4817]: E0314 06:30:52.985825 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d52712-bfe2-4385-9f97-3d80f0ad0d44" containerName="collect-profiles" Mar 14 06:30:52 crc kubenswrapper[4817]: I0314 06:30:52.985861 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d52712-bfe2-4385-9f97-3d80f0ad0d44" containerName="collect-profiles" Mar 14 06:30:52 crc kubenswrapper[4817]: E0314 06:30:52.985927 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34948651-9926-42ca-bdd6-d227b7e7797c" containerName="oc" Mar 14 06:30:52 crc kubenswrapper[4817]: I0314 06:30:52.985936 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="34948651-9926-42ca-bdd6-d227b7e7797c" containerName="oc" Mar 14 06:30:52 crc kubenswrapper[4817]: I0314 06:30:52.986304 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="34948651-9926-42ca-bdd6-d227b7e7797c" containerName="oc" Mar 14 06:30:52 crc kubenswrapper[4817]: I0314 06:30:52.986325 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d52712-bfe2-4385-9f97-3d80f0ad0d44" containerName="collect-profiles" Mar 14 06:30:52 crc kubenswrapper[4817]: I0314 06:30:52.988718 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.007498 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krqnz"]
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.063338 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-utilities\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.063446 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-catalog-content\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.063562 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22dr2\" (UniqueName: \"kubernetes.io/projected/6cc7a280-5623-4d1b-b20b-d628bacce64b-kube-api-access-22dr2\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.166190 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22dr2\" (UniqueName: \"kubernetes.io/projected/6cc7a280-5623-4d1b-b20b-d628bacce64b-kube-api-access-22dr2\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.166678 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-utilities\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.166752 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-catalog-content\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.167184 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-utilities\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.167315 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-catalog-content\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.189086 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22dr2\" (UniqueName: \"kubernetes.io/projected/6cc7a280-5623-4d1b-b20b-d628bacce64b-kube-api-access-22dr2\") pod \"redhat-marketplace-krqnz\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") " pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.354007 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:30:53 crc kubenswrapper[4817]: I0314 06:30:53.867658 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-krqnz"]
Mar 14 06:30:54 crc kubenswrapper[4817]: I0314 06:30:54.358362 4817 generic.go:334] "Generic (PLEG): container finished" podID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerID="9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6" exitCode=0
Mar 14 06:30:54 crc kubenswrapper[4817]: I0314 06:30:54.358509 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krqnz" event={"ID":"6cc7a280-5623-4d1b-b20b-d628bacce64b","Type":"ContainerDied","Data":"9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6"}
Mar 14 06:30:54 crc kubenswrapper[4817]: I0314 06:30:54.359837 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krqnz" event={"ID":"6cc7a280-5623-4d1b-b20b-d628bacce64b","Type":"ContainerStarted","Data":"77e7f3023063264e95ad8009ba3efef656e9bc27c385e2915d6316e18bd4dcdc"}
Mar 14 06:30:54 crc kubenswrapper[4817]: I0314 06:30:54.363909 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 06:30:55 crc kubenswrapper[4817]: I0314 06:30:55.371006 4817 generic.go:334] "Generic (PLEG): container finished" podID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerID="c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b" exitCode=0
Mar 14 06:30:55 crc kubenswrapper[4817]: I0314 06:30:55.371114 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krqnz" event={"ID":"6cc7a280-5623-4d1b-b20b-d628bacce64b","Type":"ContainerDied","Data":"c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b"}
Mar 14 06:30:56 crc kubenswrapper[4817]: I0314 06:30:56.393998 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krqnz" event={"ID":"6cc7a280-5623-4d1b-b20b-d628bacce64b","Type":"ContainerStarted","Data":"4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91"}
Mar 14 06:30:56 crc kubenswrapper[4817]: I0314 06:30:56.431639 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-krqnz" podStartSLOduration=3.027401801 podStartE2EDuration="4.431608589s" podCreationTimestamp="2026-03-14 06:30:52 +0000 UTC" firstStartedPulling="2026-03-14 06:30:54.363569466 +0000 UTC m=+3508.401830212" lastFinishedPulling="2026-03-14 06:30:55.767776254 +0000 UTC m=+3509.806037000" observedRunningTime="2026-03-14 06:30:56.415086569 +0000 UTC m=+3510.453347325" watchObservedRunningTime="2026-03-14 06:30:56.431608589 +0000 UTC m=+3510.469869345"
Mar 14 06:31:03 crc kubenswrapper[4817]: I0314 06:31:03.354983 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:31:03 crc kubenswrapper[4817]: I0314 06:31:03.355488 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:31:03 crc kubenswrapper[4817]: I0314 06:31:03.412615 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:31:03 crc kubenswrapper[4817]: I0314 06:31:03.511631 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:31:03 crc kubenswrapper[4817]: I0314 06:31:03.652478 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krqnz"]
Mar 14 06:31:05 crc kubenswrapper[4817]: I0314 06:31:05.486665 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-krqnz" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerName="registry-server" containerID="cri-o://4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91" gracePeriod=2
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.097963 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.252723 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-catalog-content\") pod \"6cc7a280-5623-4d1b-b20b-d628bacce64b\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") "
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.252835 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22dr2\" (UniqueName: \"kubernetes.io/projected/6cc7a280-5623-4d1b-b20b-d628bacce64b-kube-api-access-22dr2\") pod \"6cc7a280-5623-4d1b-b20b-d628bacce64b\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") "
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.253147 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-utilities\") pod \"6cc7a280-5623-4d1b-b20b-d628bacce64b\" (UID: \"6cc7a280-5623-4d1b-b20b-d628bacce64b\") "
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.254231 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-utilities" (OuterVolumeSpecName: "utilities") pod "6cc7a280-5623-4d1b-b20b-d628bacce64b" (UID: "6cc7a280-5623-4d1b-b20b-d628bacce64b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.254837 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.264083 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc7a280-5623-4d1b-b20b-d628bacce64b-kube-api-access-22dr2" (OuterVolumeSpecName: "kube-api-access-22dr2") pod "6cc7a280-5623-4d1b-b20b-d628bacce64b" (UID: "6cc7a280-5623-4d1b-b20b-d628bacce64b"). InnerVolumeSpecName "kube-api-access-22dr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.283013 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cc7a280-5623-4d1b-b20b-d628bacce64b" (UID: "6cc7a280-5623-4d1b-b20b-d628bacce64b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.357075 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc7a280-5623-4d1b-b20b-d628bacce64b-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.357118 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22dr2\" (UniqueName: \"kubernetes.io/projected/6cc7a280-5623-4d1b-b20b-d628bacce64b-kube-api-access-22dr2\") on node \"crc\" DevicePath \"\""
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.497286 4817 generic.go:334] "Generic (PLEG): container finished" podID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerID="4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91" exitCode=0
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.497336 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krqnz" event={"ID":"6cc7a280-5623-4d1b-b20b-d628bacce64b","Type":"ContainerDied","Data":"4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91"}
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.497362 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-krqnz"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.497380 4817 scope.go:117] "RemoveContainer" containerID="4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.497366 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-krqnz" event={"ID":"6cc7a280-5623-4d1b-b20b-d628bacce64b","Type":"ContainerDied","Data":"77e7f3023063264e95ad8009ba3efef656e9bc27c385e2915d6316e18bd4dcdc"}
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.520706 4817 scope.go:117] "RemoveContainer" containerID="c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.531460 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-krqnz"]
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.540664 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-krqnz"]
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.549274 4817 scope.go:117] "RemoveContainer" containerID="9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.592004 4817 scope.go:117] "RemoveContainer" containerID="4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91"
Mar 14 06:31:06 crc kubenswrapper[4817]: E0314 06:31:06.592631 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91\": container with ID starting with 4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91 not found: ID does not exist" containerID="4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.592693 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91"} err="failed to get container status \"4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91\": rpc error: code = NotFound desc = could not find container \"4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91\": container with ID starting with 4b7e85ae33b5db9c53af4f3cee0af51c7f92f2fae22ca549463fa8e4519b1e91 not found: ID does not exist"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.592731 4817 scope.go:117] "RemoveContainer" containerID="c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b"
Mar 14 06:31:06 crc kubenswrapper[4817]: E0314 06:31:06.593315 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b\": container with ID starting with c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b not found: ID does not exist" containerID="c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.593375 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b"} err="failed to get container status \"c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b\": rpc error: code = NotFound desc = could not find container \"c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b\": container with ID starting with c267b583629e22a5d7310d05d3fe6655cb82d7148971782d811ec6e80818661b not found: ID does not exist"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.593406 4817 scope.go:117] "RemoveContainer" containerID="9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6"
Mar 14 06:31:06 crc kubenswrapper[4817]: E0314 06:31:06.593726 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6\": container with ID starting with 9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6 not found: ID does not exist" containerID="9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.593759 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6"} err="failed to get container status \"9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6\": rpc error: code = NotFound desc = could not find container \"9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6\": container with ID starting with 9bfb058dca5612ec1ad4ca8a3de51d04d6394448b0de0f2ab2aed1e426cc10e6 not found: ID does not exist"
Mar 14 06:31:06 crc kubenswrapper[4817]: I0314 06:31:06.746854 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" path="/var/lib/kubelet/pods/6cc7a280-5623-4d1b-b20b-d628bacce64b/volumes"
Mar 14 06:31:08 crc kubenswrapper[4817]: I0314 06:31:08.565360 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:31:08 crc kubenswrapper[4817]: I0314 06:31:08.565809 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.416264 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mwx9"]
Mar 14 06:31:27 crc kubenswrapper[4817]: E0314 06:31:27.417345 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerName="registry-server"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.417361 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerName="registry-server"
Mar 14 06:31:27 crc kubenswrapper[4817]: E0314 06:31:27.417389 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerName="extract-content"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.417396 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerName="extract-content"
Mar 14 06:31:27 crc kubenswrapper[4817]: E0314 06:31:27.417404 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerName="extract-utilities"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.417410 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerName="extract-utilities"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.417599 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc7a280-5623-4d1b-b20b-d628bacce64b" containerName="registry-server"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.418861 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.436802 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mwx9"]
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.506121 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8tp\" (UniqueName: \"kubernetes.io/projected/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-kube-api-access-dv8tp\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.506314 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-utilities\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.506580 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-catalog-content\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.608420 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-utilities\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.608541 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-catalog-content\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.608607 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8tp\" (UniqueName: \"kubernetes.io/projected/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-kube-api-access-dv8tp\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.608976 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-utilities\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.609146 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-catalog-content\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.629070 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8tp\" (UniqueName: \"kubernetes.io/projected/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-kube-api-access-dv8tp\") pod \"community-operators-4mwx9\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") " pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:27 crc kubenswrapper[4817]: I0314 06:31:27.739961 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:28 crc kubenswrapper[4817]: I0314 06:31:28.227851 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mwx9"]
Mar 14 06:31:28 crc kubenswrapper[4817]: I0314 06:31:28.701195 4817 generic.go:334] "Generic (PLEG): container finished" podID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerID="988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90" exitCode=0
Mar 14 06:31:28 crc kubenswrapper[4817]: I0314 06:31:28.701458 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwx9" event={"ID":"8d557748-adc4-49cc-a01d-d2cb2a2a7b24","Type":"ContainerDied","Data":"988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90"}
Mar 14 06:31:28 crc kubenswrapper[4817]: I0314 06:31:28.701489 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwx9" event={"ID":"8d557748-adc4-49cc-a01d-d2cb2a2a7b24","Type":"ContainerStarted","Data":"498a4cd0e4fb5d4dc1ff2d5fd62b1c7dee3d8ddcb04f3e415611b357bcf0f45a"}
Mar 14 06:31:29 crc kubenswrapper[4817]: I0314 06:31:29.711658 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwx9" event={"ID":"8d557748-adc4-49cc-a01d-d2cb2a2a7b24","Type":"ContainerStarted","Data":"11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec"}
Mar 14 06:31:31 crc kubenswrapper[4817]: I0314 06:31:31.754550 4817 generic.go:334] "Generic (PLEG): container finished" podID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerID="11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec" exitCode=0
Mar 14 06:31:31 crc kubenswrapper[4817]: I0314 06:31:31.754645 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwx9" event={"ID":"8d557748-adc4-49cc-a01d-d2cb2a2a7b24","Type":"ContainerDied","Data":"11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec"}
Mar 14 06:31:32 crc kubenswrapper[4817]: I0314 06:31:32.766922 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwx9" event={"ID":"8d557748-adc4-49cc-a01d-d2cb2a2a7b24","Type":"ContainerStarted","Data":"eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7"}
Mar 14 06:31:32 crc kubenswrapper[4817]: I0314 06:31:32.789798 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mwx9" podStartSLOduration=2.085529719 podStartE2EDuration="5.789782134s" podCreationTimestamp="2026-03-14 06:31:27 +0000 UTC" firstStartedPulling="2026-03-14 06:31:28.703131265 +0000 UTC m=+3542.741392011" lastFinishedPulling="2026-03-14 06:31:32.40738368 +0000 UTC m=+3546.445644426" observedRunningTime="2026-03-14 06:31:32.787105388 +0000 UTC m=+3546.825366124" watchObservedRunningTime="2026-03-14 06:31:32.789782134 +0000 UTC m=+3546.828042880"
Mar 14 06:31:37 crc kubenswrapper[4817]: I0314 06:31:37.740599 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:37 crc kubenswrapper[4817]: I0314 06:31:37.741409 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:38 crc kubenswrapper[4817]: I0314 06:31:38.566003 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:31:38 crc kubenswrapper[4817]: I0314 06:31:38.566074 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:31:38 crc kubenswrapper[4817]: I0314 06:31:38.788626 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4mwx9" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="registry-server" probeResult="failure" output=<
Mar 14 06:31:38 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s
Mar 14 06:31:38 crc kubenswrapper[4817]: >
Mar 14 06:31:47 crc kubenswrapper[4817]: I0314 06:31:47.815349 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:47 crc kubenswrapper[4817]: I0314 06:31:47.872443 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:48 crc kubenswrapper[4817]: I0314 06:31:48.065065 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mwx9"]
Mar 14 06:31:48 crc kubenswrapper[4817]: I0314 06:31:48.906351 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mwx9" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="registry-server" containerID="cri-o://eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7" gracePeriod=2
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.543555 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.593872 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-catalog-content\") pod \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") "
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.594027 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-utilities\") pod \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") "
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.594062 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv8tp\" (UniqueName: \"kubernetes.io/projected/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-kube-api-access-dv8tp\") pod \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\" (UID: \"8d557748-adc4-49cc-a01d-d2cb2a2a7b24\") "
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.595460 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-utilities" (OuterVolumeSpecName: "utilities") pod "8d557748-adc4-49cc-a01d-d2cb2a2a7b24" (UID: "8d557748-adc4-49cc-a01d-d2cb2a2a7b24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.600874 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-kube-api-access-dv8tp" (OuterVolumeSpecName: "kube-api-access-dv8tp") pod "8d557748-adc4-49cc-a01d-d2cb2a2a7b24" (UID: "8d557748-adc4-49cc-a01d-d2cb2a2a7b24"). InnerVolumeSpecName "kube-api-access-dv8tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.656860 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d557748-adc4-49cc-a01d-d2cb2a2a7b24" (UID: "8d557748-adc4-49cc-a01d-d2cb2a2a7b24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.697347 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.697395 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.697409 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv8tp\" (UniqueName: \"kubernetes.io/projected/8d557748-adc4-49cc-a01d-d2cb2a2a7b24-kube-api-access-dv8tp\") on node \"crc\" DevicePath \"\""
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.920450 4817 generic.go:334] "Generic (PLEG): container finished" podID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerID="eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7" exitCode=0
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.920507 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwx9" event={"ID":"8d557748-adc4-49cc-a01d-d2cb2a2a7b24","Type":"ContainerDied","Data":"eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7"}
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.920545 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mwx9" event={"ID":"8d557748-adc4-49cc-a01d-d2cb2a2a7b24","Type":"ContainerDied","Data":"498a4cd0e4fb5d4dc1ff2d5fd62b1c7dee3d8ddcb04f3e415611b357bcf0f45a"}
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.920565 4817 scope.go:117] "RemoveContainer" containerID="eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7"
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.920684 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mwx9"
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.963513 4817 scope.go:117] "RemoveContainer" containerID="11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec"
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.975821 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mwx9"]
Mar 14 06:31:49 crc kubenswrapper[4817]: I0314 06:31:49.986774 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mwx9"]
Mar 14 06:31:50 crc kubenswrapper[4817]: I0314 06:31:50.019016 4817 scope.go:117] "RemoveContainer" containerID="988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90"
Mar 14 06:31:50 crc kubenswrapper[4817]: I0314 06:31:50.080598 4817 scope.go:117] "RemoveContainer" containerID="eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7"
Mar 14 06:31:50 crc kubenswrapper[4817]: E0314 06:31:50.081810 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7\": container with ID starting with eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7 not found: ID does not exist" containerID="eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7"
Mar 14 06:31:50 crc kubenswrapper[4817]: I0314 06:31:50.081855 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7"} err="failed to get container status \"eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7\": rpc error: code = NotFound desc = could not find container \"eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7\": container with ID starting with eb8a1e1d428bdd8f87a8cf4eb27ba58058a3d606336012482dbd86e6a850d6e7 not found: ID does not exist"
Mar 14 06:31:50 crc kubenswrapper[4817]: I0314 06:31:50.081882 4817 scope.go:117] "RemoveContainer" containerID="11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec"
Mar 14 06:31:50 crc kubenswrapper[4817]: E0314 06:31:50.087498 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec\": container with ID starting with 11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec not found: ID does not exist" containerID="11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec"
Mar 14 06:31:50 crc kubenswrapper[4817]: I0314 06:31:50.087549 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec"} err="failed to get container status \"11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec\": rpc error: code = NotFound desc = could not find container \"11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec\": container with ID starting with 11c6dda9e4bf7f19f761a282a72acddd730d43497fc266f0992db71543651dec not found: ID does not exist"
Mar 14 06:31:50 crc kubenswrapper[4817]: I0314 06:31:50.087583 4817 scope.go:117] "RemoveContainer" containerID="988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90"
Mar 14 06:31:50 crc kubenswrapper[4817]: E0314 06:31:50.088250 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90\": container with ID starting with 988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90 not found: ID does not exist" containerID="988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90"
Mar 14 06:31:50 crc kubenswrapper[4817]: I0314 06:31:50.088305 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90"} err="failed to get container status \"988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90\": rpc error: code = NotFound desc = could not find container \"988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90\": container with ID starting with 988207b50bcb6ec3d93a139ec053e8c420e7d45a1a0d94940ec00b9145d1db90 not found: ID does not exist"
Mar 14 06:31:50 crc kubenswrapper[4817]: I0314 06:31:50.751122 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" path="/var/lib/kubelet/pods/8d557748-adc4-49cc-a01d-d2cb2a2a7b24/volumes"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.154887 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557832-mn6l2"]
Mar 14 06:32:00 crc kubenswrapper[4817]: E0314 06:32:00.155917 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="registry-server"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.155932 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="registry-server"
Mar 14 06:32:00 crc kubenswrapper[4817]: E0314 06:32:00.155959 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="extract-utilities"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.155966 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="extract-utilities"
Mar 14 06:32:00 crc kubenswrapper[4817]: E0314 06:32:00.155996 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="extract-content"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.156005 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="extract-content"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.156207 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d557748-adc4-49cc-a01d-d2cb2a2a7b24" containerName="registry-server"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.156980 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557832-mn6l2"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.160333 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.160675 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.160946 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.170729 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557832-mn6l2"]
Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.224225 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99qw\" (UniqueName:
\"kubernetes.io/projected/543ebca0-20cb-43b0-ad71-75dc8128c7b6-kube-api-access-g99qw\") pod \"auto-csr-approver-29557832-mn6l2\" (UID: \"543ebca0-20cb-43b0-ad71-75dc8128c7b6\") " pod="openshift-infra/auto-csr-approver-29557832-mn6l2" Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.327325 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g99qw\" (UniqueName: \"kubernetes.io/projected/543ebca0-20cb-43b0-ad71-75dc8128c7b6-kube-api-access-g99qw\") pod \"auto-csr-approver-29557832-mn6l2\" (UID: \"543ebca0-20cb-43b0-ad71-75dc8128c7b6\") " pod="openshift-infra/auto-csr-approver-29557832-mn6l2" Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.347952 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99qw\" (UniqueName: \"kubernetes.io/projected/543ebca0-20cb-43b0-ad71-75dc8128c7b6-kube-api-access-g99qw\") pod \"auto-csr-approver-29557832-mn6l2\" (UID: \"543ebca0-20cb-43b0-ad71-75dc8128c7b6\") " pod="openshift-infra/auto-csr-approver-29557832-mn6l2" Mar 14 06:32:00 crc kubenswrapper[4817]: I0314 06:32:00.485237 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557832-mn6l2" Mar 14 06:32:01 crc kubenswrapper[4817]: I0314 06:32:01.005786 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557832-mn6l2"] Mar 14 06:32:01 crc kubenswrapper[4817]: I0314 06:32:01.017320 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557832-mn6l2" event={"ID":"543ebca0-20cb-43b0-ad71-75dc8128c7b6","Type":"ContainerStarted","Data":"96ffb598ade785ee5654f0b433607a7cf13d18f72ae3c4b0ad18b73a082d6c93"} Mar 14 06:32:03 crc kubenswrapper[4817]: I0314 06:32:03.042957 4817 generic.go:334] "Generic (PLEG): container finished" podID="543ebca0-20cb-43b0-ad71-75dc8128c7b6" containerID="fb2efde3d9fb4b66c517ed6d3dfb291f0c78915cfcbc2e2d3fa01af8302fe2fe" exitCode=0 Mar 14 06:32:03 crc kubenswrapper[4817]: I0314 06:32:03.043026 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557832-mn6l2" event={"ID":"543ebca0-20cb-43b0-ad71-75dc8128c7b6","Type":"ContainerDied","Data":"fb2efde3d9fb4b66c517ed6d3dfb291f0c78915cfcbc2e2d3fa01af8302fe2fe"} Mar 14 06:32:04 crc kubenswrapper[4817]: I0314 06:32:04.487913 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557832-mn6l2" Mar 14 06:32:04 crc kubenswrapper[4817]: I0314 06:32:04.528880 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g99qw\" (UniqueName: \"kubernetes.io/projected/543ebca0-20cb-43b0-ad71-75dc8128c7b6-kube-api-access-g99qw\") pod \"543ebca0-20cb-43b0-ad71-75dc8128c7b6\" (UID: \"543ebca0-20cb-43b0-ad71-75dc8128c7b6\") " Mar 14 06:32:04 crc kubenswrapper[4817]: I0314 06:32:04.539407 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543ebca0-20cb-43b0-ad71-75dc8128c7b6-kube-api-access-g99qw" (OuterVolumeSpecName: "kube-api-access-g99qw") pod "543ebca0-20cb-43b0-ad71-75dc8128c7b6" (UID: "543ebca0-20cb-43b0-ad71-75dc8128c7b6"). InnerVolumeSpecName "kube-api-access-g99qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:32:04 crc kubenswrapper[4817]: I0314 06:32:04.631265 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g99qw\" (UniqueName: \"kubernetes.io/projected/543ebca0-20cb-43b0-ad71-75dc8128c7b6-kube-api-access-g99qw\") on node \"crc\" DevicePath \"\"" Mar 14 06:32:05 crc kubenswrapper[4817]: I0314 06:32:05.065965 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557832-mn6l2" event={"ID":"543ebca0-20cb-43b0-ad71-75dc8128c7b6","Type":"ContainerDied","Data":"96ffb598ade785ee5654f0b433607a7cf13d18f72ae3c4b0ad18b73a082d6c93"} Mar 14 06:32:05 crc kubenswrapper[4817]: I0314 06:32:05.066020 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ffb598ade785ee5654f0b433607a7cf13d18f72ae3c4b0ad18b73a082d6c93" Mar 14 06:32:05 crc kubenswrapper[4817]: I0314 06:32:05.066047 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557832-mn6l2" Mar 14 06:32:05 crc kubenswrapper[4817]: I0314 06:32:05.572808 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557826-m659g"] Mar 14 06:32:05 crc kubenswrapper[4817]: I0314 06:32:05.580753 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557826-m659g"] Mar 14 06:32:06 crc kubenswrapper[4817]: I0314 06:32:06.769327 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd80be70-77d4-4e83-9348-ef11ecbc9a52" path="/var/lib/kubelet/pods/fd80be70-77d4-4e83-9348-ef11ecbc9a52/volumes" Mar 14 06:32:08 crc kubenswrapper[4817]: I0314 06:32:08.565691 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:32:08 crc kubenswrapper[4817]: I0314 06:32:08.566119 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:32:08 crc kubenswrapper[4817]: I0314 06:32:08.566184 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 06:32:08 crc kubenswrapper[4817]: I0314 06:32:08.567274 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d569fb6c3ed877b8474879dfe81b9cdc9ebaabb0fcc1493502863fb34518ab89"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 06:32:08 crc kubenswrapper[4817]: I0314 06:32:08.567357 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://d569fb6c3ed877b8474879dfe81b9cdc9ebaabb0fcc1493502863fb34518ab89" gracePeriod=600 Mar 14 06:32:09 crc kubenswrapper[4817]: I0314 06:32:09.102984 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="d569fb6c3ed877b8474879dfe81b9cdc9ebaabb0fcc1493502863fb34518ab89" exitCode=0 Mar 14 06:32:09 crc kubenswrapper[4817]: I0314 06:32:09.103305 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"d569fb6c3ed877b8474879dfe81b9cdc9ebaabb0fcc1493502863fb34518ab89"} Mar 14 06:32:09 crc kubenswrapper[4817]: I0314 06:32:09.103334 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f"} Mar 14 06:32:09 crc kubenswrapper[4817]: I0314 06:32:09.103356 4817 scope.go:117] "RemoveContainer" containerID="17e56ba2a147be3fd2011384b03b85b6a9359c2ebcdddf97259cfcccb05ae537" Mar 14 06:32:46 crc kubenswrapper[4817]: I0314 06:32:46.527941 4817 scope.go:117] "RemoveContainer" containerID="7cc72739a245b3ab369c36b2b4d4d39b8d1ee52e335cb6a0f86641dcbb664121" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.147606 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557834-sw67r"] Mar 14 06:34:00 crc kubenswrapper[4817]: E0314 
06:34:00.148712 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543ebca0-20cb-43b0-ad71-75dc8128c7b6" containerName="oc" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.148730 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="543ebca0-20cb-43b0-ad71-75dc8128c7b6" containerName="oc" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.148964 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="543ebca0-20cb-43b0-ad71-75dc8128c7b6" containerName="oc" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.149758 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557834-sw67r" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.152061 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.152412 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.152540 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.177760 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557834-sw67r"] Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.274024 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg7zk\" (UniqueName: \"kubernetes.io/projected/8140bc3d-41a5-499c-b797-0e6d19addd0b-kube-api-access-vg7zk\") pod \"auto-csr-approver-29557834-sw67r\" (UID: \"8140bc3d-41a5-499c-b797-0e6d19addd0b\") " pod="openshift-infra/auto-csr-approver-29557834-sw67r" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.375832 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vg7zk\" (UniqueName: \"kubernetes.io/projected/8140bc3d-41a5-499c-b797-0e6d19addd0b-kube-api-access-vg7zk\") pod \"auto-csr-approver-29557834-sw67r\" (UID: \"8140bc3d-41a5-499c-b797-0e6d19addd0b\") " pod="openshift-infra/auto-csr-approver-29557834-sw67r" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.407669 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg7zk\" (UniqueName: \"kubernetes.io/projected/8140bc3d-41a5-499c-b797-0e6d19addd0b-kube-api-access-vg7zk\") pod \"auto-csr-approver-29557834-sw67r\" (UID: \"8140bc3d-41a5-499c-b797-0e6d19addd0b\") " pod="openshift-infra/auto-csr-approver-29557834-sw67r" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.477438 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557834-sw67r" Mar 14 06:34:00 crc kubenswrapper[4817]: I0314 06:34:00.962989 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557834-sw67r"] Mar 14 06:34:01 crc kubenswrapper[4817]: I0314 06:34:01.163106 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557834-sw67r" event={"ID":"8140bc3d-41a5-499c-b797-0e6d19addd0b","Type":"ContainerStarted","Data":"b8508ea8b29b9e8982cd095764856cc24fec90085e8bc2db139417d9781001e8"} Mar 14 06:34:02 crc kubenswrapper[4817]: I0314 06:34:02.174654 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557834-sw67r" event={"ID":"8140bc3d-41a5-499c-b797-0e6d19addd0b","Type":"ContainerStarted","Data":"9fdd7f87f1ad3f5e2b3a4febe518b45e91aef26cb3a72cd3ff9034dc1fa23215"} Mar 14 06:34:03 crc kubenswrapper[4817]: I0314 06:34:03.188243 4817 generic.go:334] "Generic (PLEG): container finished" podID="8140bc3d-41a5-499c-b797-0e6d19addd0b" containerID="9fdd7f87f1ad3f5e2b3a4febe518b45e91aef26cb3a72cd3ff9034dc1fa23215" exitCode=0 Mar 14 06:34:03 crc kubenswrapper[4817]: 
I0314 06:34:03.188300 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557834-sw67r" event={"ID":"8140bc3d-41a5-499c-b797-0e6d19addd0b","Type":"ContainerDied","Data":"9fdd7f87f1ad3f5e2b3a4febe518b45e91aef26cb3a72cd3ff9034dc1fa23215"} Mar 14 06:34:04 crc kubenswrapper[4817]: I0314 06:34:04.655993 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557834-sw67r" Mar 14 06:34:04 crc kubenswrapper[4817]: I0314 06:34:04.773440 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg7zk\" (UniqueName: \"kubernetes.io/projected/8140bc3d-41a5-499c-b797-0e6d19addd0b-kube-api-access-vg7zk\") pod \"8140bc3d-41a5-499c-b797-0e6d19addd0b\" (UID: \"8140bc3d-41a5-499c-b797-0e6d19addd0b\") " Mar 14 06:34:04 crc kubenswrapper[4817]: I0314 06:34:04.780641 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8140bc3d-41a5-499c-b797-0e6d19addd0b-kube-api-access-vg7zk" (OuterVolumeSpecName: "kube-api-access-vg7zk") pod "8140bc3d-41a5-499c-b797-0e6d19addd0b" (UID: "8140bc3d-41a5-499c-b797-0e6d19addd0b"). InnerVolumeSpecName "kube-api-access-vg7zk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:34:04 crc kubenswrapper[4817]: I0314 06:34:04.877904 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg7zk\" (UniqueName: \"kubernetes.io/projected/8140bc3d-41a5-499c-b797-0e6d19addd0b-kube-api-access-vg7zk\") on node \"crc\" DevicePath \"\"" Mar 14 06:34:05 crc kubenswrapper[4817]: I0314 06:34:05.209365 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557834-sw67r" event={"ID":"8140bc3d-41a5-499c-b797-0e6d19addd0b","Type":"ContainerDied","Data":"b8508ea8b29b9e8982cd095764856cc24fec90085e8bc2db139417d9781001e8"} Mar 14 06:34:05 crc kubenswrapper[4817]: I0314 06:34:05.209925 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8508ea8b29b9e8982cd095764856cc24fec90085e8bc2db139417d9781001e8" Mar 14 06:34:05 crc kubenswrapper[4817]: I0314 06:34:05.209442 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557834-sw67r" Mar 14 06:34:05 crc kubenswrapper[4817]: I0314 06:34:05.275921 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557828-kp9sx"] Mar 14 06:34:05 crc kubenswrapper[4817]: I0314 06:34:05.284557 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557828-kp9sx"] Mar 14 06:34:06 crc kubenswrapper[4817]: I0314 06:34:06.745647 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4" path="/var/lib/kubelet/pods/8cb8a39e-13c1-4ff0-b5a5-f281ff4617b4/volumes" Mar 14 06:34:08 crc kubenswrapper[4817]: I0314 06:34:08.566040 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 14 06:34:08 crc kubenswrapper[4817]: I0314 06:34:08.566376 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:34:12 crc kubenswrapper[4817]: I0314 06:34:12.817578 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5bgmn"] Mar 14 06:34:12 crc kubenswrapper[4817]: E0314 06:34:12.818434 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8140bc3d-41a5-499c-b797-0e6d19addd0b" containerName="oc" Mar 14 06:34:12 crc kubenswrapper[4817]: I0314 06:34:12.818453 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8140bc3d-41a5-499c-b797-0e6d19addd0b" containerName="oc" Mar 14 06:34:12 crc kubenswrapper[4817]: I0314 06:34:12.818698 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8140bc3d-41a5-499c-b797-0e6d19addd0b" containerName="oc" Mar 14 06:34:12 crc kubenswrapper[4817]: I0314 06:34:12.820499 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:12 crc kubenswrapper[4817]: I0314 06:34:12.827280 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bgmn"] Mar 14 06:34:12 crc kubenswrapper[4817]: I0314 06:34:12.955688 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6njx\" (UniqueName: \"kubernetes.io/projected/62c16825-70c9-443e-9f99-0aa60dddb7a0-kube-api-access-j6njx\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:12 crc kubenswrapper[4817]: I0314 06:34:12.955856 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-catalog-content\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:12 crc kubenswrapper[4817]: I0314 06:34:12.955927 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-utilities\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:13 crc kubenswrapper[4817]: I0314 06:34:13.058827 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6njx\" (UniqueName: \"kubernetes.io/projected/62c16825-70c9-443e-9f99-0aa60dddb7a0-kube-api-access-j6njx\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:13 crc kubenswrapper[4817]: I0314 06:34:13.058927 4817 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-catalog-content\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:13 crc kubenswrapper[4817]: I0314 06:34:13.058955 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-utilities\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:13 crc kubenswrapper[4817]: I0314 06:34:13.059589 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-utilities\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:13 crc kubenswrapper[4817]: I0314 06:34:13.059617 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-catalog-content\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:13 crc kubenswrapper[4817]: I0314 06:34:13.080098 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6njx\" (UniqueName: \"kubernetes.io/projected/62c16825-70c9-443e-9f99-0aa60dddb7a0-kube-api-access-j6njx\") pod \"certified-operators-5bgmn\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:13 crc kubenswrapper[4817]: I0314 06:34:13.145277 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:13 crc kubenswrapper[4817]: I0314 06:34:13.694836 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bgmn"] Mar 14 06:34:14 crc kubenswrapper[4817]: I0314 06:34:14.321805 4817 generic.go:334] "Generic (PLEG): container finished" podID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerID="8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572" exitCode=0 Mar 14 06:34:14 crc kubenswrapper[4817]: I0314 06:34:14.322073 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bgmn" event={"ID":"62c16825-70c9-443e-9f99-0aa60dddb7a0","Type":"ContainerDied","Data":"8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572"} Mar 14 06:34:14 crc kubenswrapper[4817]: I0314 06:34:14.322170 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bgmn" event={"ID":"62c16825-70c9-443e-9f99-0aa60dddb7a0","Type":"ContainerStarted","Data":"25c9a25239f15029ca47cb1032fd432130b6beaf2efcc462ee96cd09e715e769"} Mar 14 06:34:16 crc kubenswrapper[4817]: I0314 06:34:16.344233 4817 generic.go:334] "Generic (PLEG): container finished" podID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerID="2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da" exitCode=0 Mar 14 06:34:16 crc kubenswrapper[4817]: I0314 06:34:16.344336 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bgmn" event={"ID":"62c16825-70c9-443e-9f99-0aa60dddb7a0","Type":"ContainerDied","Data":"2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da"} Mar 14 06:34:17 crc kubenswrapper[4817]: I0314 06:34:17.363495 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bgmn" 
event={"ID":"62c16825-70c9-443e-9f99-0aa60dddb7a0","Type":"ContainerStarted","Data":"dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f"} Mar 14 06:34:17 crc kubenswrapper[4817]: I0314 06:34:17.382193 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5bgmn" podStartSLOduration=2.881695755 podStartE2EDuration="5.382174496s" podCreationTimestamp="2026-03-14 06:34:12 +0000 UTC" firstStartedPulling="2026-03-14 06:34:14.323701442 +0000 UTC m=+3708.361962198" lastFinishedPulling="2026-03-14 06:34:16.824180183 +0000 UTC m=+3710.862440939" observedRunningTime="2026-03-14 06:34:17.381489516 +0000 UTC m=+3711.419750282" watchObservedRunningTime="2026-03-14 06:34:17.382174496 +0000 UTC m=+3711.420435242" Mar 14 06:34:22 crc kubenswrapper[4817]: I0314 06:34:22.049998 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-30bc-account-create-update-d8bsz"] Mar 14 06:34:22 crc kubenswrapper[4817]: I0314 06:34:22.060194 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-9wfp5"] Mar 14 06:34:22 crc kubenswrapper[4817]: I0314 06:34:22.070577 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-30bc-account-create-update-d8bsz"] Mar 14 06:34:22 crc kubenswrapper[4817]: I0314 06:34:22.079315 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-9wfp5"] Mar 14 06:34:22 crc kubenswrapper[4817]: I0314 06:34:22.752073 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986e1019-bba0-4ef2-9b54-0929a563895b" path="/var/lib/kubelet/pods/986e1019-bba0-4ef2-9b54-0929a563895b/volumes" Mar 14 06:34:22 crc kubenswrapper[4817]: I0314 06:34:22.753294 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d31186-90a1-4a20-a042-417e1ed712c6" path="/var/lib/kubelet/pods/e4d31186-90a1-4a20-a042-417e1ed712c6/volumes" Mar 14 06:34:23 crc kubenswrapper[4817]: I0314 
06:34:23.147073 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:23 crc kubenswrapper[4817]: I0314 06:34:23.147504 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:23 crc kubenswrapper[4817]: I0314 06:34:23.232051 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:23 crc kubenswrapper[4817]: I0314 06:34:23.480751 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:23 crc kubenswrapper[4817]: I0314 06:34:23.538971 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bgmn"] Mar 14 06:34:25 crc kubenswrapper[4817]: I0314 06:34:25.442630 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5bgmn" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerName="registry-server" containerID="cri-o://dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f" gracePeriod=2 Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.017959 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.128072 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6njx\" (UniqueName: \"kubernetes.io/projected/62c16825-70c9-443e-9f99-0aa60dddb7a0-kube-api-access-j6njx\") pod \"62c16825-70c9-443e-9f99-0aa60dddb7a0\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.128288 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-catalog-content\") pod \"62c16825-70c9-443e-9f99-0aa60dddb7a0\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.128388 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-utilities\") pod \"62c16825-70c9-443e-9f99-0aa60dddb7a0\" (UID: \"62c16825-70c9-443e-9f99-0aa60dddb7a0\") " Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.130288 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-utilities" (OuterVolumeSpecName: "utilities") pod "62c16825-70c9-443e-9f99-0aa60dddb7a0" (UID: "62c16825-70c9-443e-9f99-0aa60dddb7a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.139328 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c16825-70c9-443e-9f99-0aa60dddb7a0-kube-api-access-j6njx" (OuterVolumeSpecName: "kube-api-access-j6njx") pod "62c16825-70c9-443e-9f99-0aa60dddb7a0" (UID: "62c16825-70c9-443e-9f99-0aa60dddb7a0"). InnerVolumeSpecName "kube-api-access-j6njx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.182434 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62c16825-70c9-443e-9f99-0aa60dddb7a0" (UID: "62c16825-70c9-443e-9f99-0aa60dddb7a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.230969 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6njx\" (UniqueName: \"kubernetes.io/projected/62c16825-70c9-443e-9f99-0aa60dddb7a0-kube-api-access-j6njx\") on node \"crc\" DevicePath \"\"" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.231012 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.231023 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62c16825-70c9-443e-9f99-0aa60dddb7a0-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.455605 4817 generic.go:334] "Generic (PLEG): container finished" podID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerID="dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f" exitCode=0 Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.455674 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bgmn" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.455693 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bgmn" event={"ID":"62c16825-70c9-443e-9f99-0aa60dddb7a0","Type":"ContainerDied","Data":"dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f"} Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.456121 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bgmn" event={"ID":"62c16825-70c9-443e-9f99-0aa60dddb7a0","Type":"ContainerDied","Data":"25c9a25239f15029ca47cb1032fd432130b6beaf2efcc462ee96cd09e715e769"} Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.456140 4817 scope.go:117] "RemoveContainer" containerID="dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.479630 4817 scope.go:117] "RemoveContainer" containerID="2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.498979 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bgmn"] Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.500501 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5bgmn"] Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.518986 4817 scope.go:117] "RemoveContainer" containerID="8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.563324 4817 scope.go:117] "RemoveContainer" containerID="dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f" Mar 14 06:34:26 crc kubenswrapper[4817]: E0314 06:34:26.564468 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f\": container with ID starting with dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f not found: ID does not exist" containerID="dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.564527 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f"} err="failed to get container status \"dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f\": rpc error: code = NotFound desc = could not find container \"dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f\": container with ID starting with dfa4b31f3437f9ea4f36106d4f1fd59e5ab6a5fbee5b72b202a856fb374f382f not found: ID does not exist" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.564562 4817 scope.go:117] "RemoveContainer" containerID="2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da" Mar 14 06:34:26 crc kubenswrapper[4817]: E0314 06:34:26.565106 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da\": container with ID starting with 2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da not found: ID does not exist" containerID="2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.565142 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da"} err="failed to get container status \"2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da\": rpc error: code = NotFound desc = could not find container \"2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da\": container with ID 
starting with 2fe79d84c41aea957d9fe166bbb44993c434f71d4da07024c621921d72e0a2da not found: ID does not exist" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.565161 4817 scope.go:117] "RemoveContainer" containerID="8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572" Mar 14 06:34:26 crc kubenswrapper[4817]: E0314 06:34:26.565432 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572\": container with ID starting with 8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572 not found: ID does not exist" containerID="8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.565464 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572"} err="failed to get container status \"8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572\": rpc error: code = NotFound desc = could not find container \"8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572\": container with ID starting with 8a081b8ef97b3a51d63d623651aed3c0c3ee6991851e0f817e50a1d7a90a0572 not found: ID does not exist" Mar 14 06:34:26 crc kubenswrapper[4817]: I0314 06:34:26.746883 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" path="/var/lib/kubelet/pods/62c16825-70c9-443e-9f99-0aa60dddb7a0/volumes" Mar 14 06:34:38 crc kubenswrapper[4817]: I0314 06:34:38.565880 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:34:38 crc kubenswrapper[4817]: I0314 
06:34:38.566452 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:34:46 crc kubenswrapper[4817]: I0314 06:34:46.659030 4817 scope.go:117] "RemoveContainer" containerID="41ebf9ccb10c7a2e2ed984c8ba16824c4c89733f84f3566fcf02ed3d9b22a5fd" Mar 14 06:34:46 crc kubenswrapper[4817]: I0314 06:34:46.683115 4817 scope.go:117] "RemoveContainer" containerID="4b4c79f96e16812f9625ed94592b3ae15b42ae76beb21f749e53e52b413e8159" Mar 14 06:34:46 crc kubenswrapper[4817]: I0314 06:34:46.745411 4817 scope.go:117] "RemoveContainer" containerID="d77df3d0558931ca3fd400c6782cd0b66dc01d92b88c6647e8771ccd3b427e10" Mar 14 06:34:52 crc kubenswrapper[4817]: I0314 06:34:52.068374 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-k845r"] Mar 14 06:34:52 crc kubenswrapper[4817]: I0314 06:34:52.079618 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-k845r"] Mar 14 06:34:52 crc kubenswrapper[4817]: I0314 06:34:52.744511 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f0b273c-0bba-481a-85db-ce740bae29d2" path="/var/lib/kubelet/pods/2f0b273c-0bba-481a-85db-ce740bae29d2/volumes" Mar 14 06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.565825 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.566493 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" 
podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.566553 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.567275 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.567320 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" gracePeriod=600 Mar 14 06:35:08 crc kubenswrapper[4817]: E0314 06:35:08.690993 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.853999 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" exitCode=0 Mar 14 
06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.854073 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f"} Mar 14 06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.854197 4817 scope.go:117] "RemoveContainer" containerID="d569fb6c3ed877b8474879dfe81b9cdc9ebaabb0fcc1493502863fb34518ab89" Mar 14 06:35:08 crc kubenswrapper[4817]: I0314 06:35:08.855305 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:35:08 crc kubenswrapper[4817]: E0314 06:35:08.855580 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:35:23 crc kubenswrapper[4817]: I0314 06:35:23.733032 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:35:23 crc kubenswrapper[4817]: E0314 06:35:23.733851 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:35:38 crc kubenswrapper[4817]: I0314 06:35:38.732585 4817 scope.go:117] "RemoveContainer" 
containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:35:38 crc kubenswrapper[4817]: E0314 06:35:38.733637 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:35:46 crc kubenswrapper[4817]: I0314 06:35:46.889389 4817 scope.go:117] "RemoveContainer" containerID="c53cb3f789490cb9ffbbba7a8d2ba4a954ff6ac5287f59f2771f951b4f72d16d" Mar 14 06:35:50 crc kubenswrapper[4817]: I0314 06:35:50.732741 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:35:50 crc kubenswrapper[4817]: E0314 06:35:50.734371 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.149211 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557836-z7dhx"] Mar 14 06:36:00 crc kubenswrapper[4817]: E0314 06:36:00.150070 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerName="registry-server" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.150085 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerName="registry-server" Mar 14 
06:36:00 crc kubenswrapper[4817]: E0314 06:36:00.150110 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerName="extract-content" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.150116 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerName="extract-content" Mar 14 06:36:00 crc kubenswrapper[4817]: E0314 06:36:00.150135 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerName="extract-utilities" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.150142 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerName="extract-utilities" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.150321 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c16825-70c9-443e-9f99-0aa60dddb7a0" containerName="registry-server" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.151214 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557836-z7dhx" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.153394 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.153613 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.153732 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.164302 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557836-z7dhx"] Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.298680 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsbh\" (UniqueName: \"kubernetes.io/projected/a79c516a-5ba1-4bfc-b405-eb94d26a94c3-kube-api-access-dgsbh\") pod \"auto-csr-approver-29557836-z7dhx\" (UID: \"a79c516a-5ba1-4bfc-b405-eb94d26a94c3\") " pod="openshift-infra/auto-csr-approver-29557836-z7dhx" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.402410 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsbh\" (UniqueName: \"kubernetes.io/projected/a79c516a-5ba1-4bfc-b405-eb94d26a94c3-kube-api-access-dgsbh\") pod \"auto-csr-approver-29557836-z7dhx\" (UID: \"a79c516a-5ba1-4bfc-b405-eb94d26a94c3\") " pod="openshift-infra/auto-csr-approver-29557836-z7dhx" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.455450 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsbh\" (UniqueName: \"kubernetes.io/projected/a79c516a-5ba1-4bfc-b405-eb94d26a94c3-kube-api-access-dgsbh\") pod \"auto-csr-approver-29557836-z7dhx\" (UID: \"a79c516a-5ba1-4bfc-b405-eb94d26a94c3\") " 
pod="openshift-infra/auto-csr-approver-29557836-z7dhx" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.479461 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557836-z7dhx" Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.924002 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557836-z7dhx"] Mar 14 06:36:00 crc kubenswrapper[4817]: I0314 06:36:00.927830 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 06:36:01 crc kubenswrapper[4817]: I0314 06:36:01.452014 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557836-z7dhx" event={"ID":"a79c516a-5ba1-4bfc-b405-eb94d26a94c3","Type":"ContainerStarted","Data":"95ee560fa84336303d3bfae0f0267624f3c6e99ed9bc8f278c96e8bb1b4861e3"} Mar 14 06:36:02 crc kubenswrapper[4817]: I0314 06:36:02.462421 4817 generic.go:334] "Generic (PLEG): container finished" podID="a79c516a-5ba1-4bfc-b405-eb94d26a94c3" containerID="5167e0897499ecbb95fc76037749c2083bb1e339f3083e94fe8deb45e82ebb7c" exitCode=0 Mar 14 06:36:02 crc kubenswrapper[4817]: I0314 06:36:02.462804 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557836-z7dhx" event={"ID":"a79c516a-5ba1-4bfc-b405-eb94d26a94c3","Type":"ContainerDied","Data":"5167e0897499ecbb95fc76037749c2083bb1e339f3083e94fe8deb45e82ebb7c"} Mar 14 06:36:03 crc kubenswrapper[4817]: I0314 06:36:03.833485 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557836-z7dhx" Mar 14 06:36:03 crc kubenswrapper[4817]: I0314 06:36:03.976716 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgsbh\" (UniqueName: \"kubernetes.io/projected/a79c516a-5ba1-4bfc-b405-eb94d26a94c3-kube-api-access-dgsbh\") pod \"a79c516a-5ba1-4bfc-b405-eb94d26a94c3\" (UID: \"a79c516a-5ba1-4bfc-b405-eb94d26a94c3\") " Mar 14 06:36:03 crc kubenswrapper[4817]: I0314 06:36:03.985200 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79c516a-5ba1-4bfc-b405-eb94d26a94c3-kube-api-access-dgsbh" (OuterVolumeSpecName: "kube-api-access-dgsbh") pod "a79c516a-5ba1-4bfc-b405-eb94d26a94c3" (UID: "a79c516a-5ba1-4bfc-b405-eb94d26a94c3"). InnerVolumeSpecName "kube-api-access-dgsbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:36:04 crc kubenswrapper[4817]: I0314 06:36:04.079732 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgsbh\" (UniqueName: \"kubernetes.io/projected/a79c516a-5ba1-4bfc-b405-eb94d26a94c3-kube-api-access-dgsbh\") on node \"crc\" DevicePath \"\"" Mar 14 06:36:04 crc kubenswrapper[4817]: I0314 06:36:04.483933 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557836-z7dhx" event={"ID":"a79c516a-5ba1-4bfc-b405-eb94d26a94c3","Type":"ContainerDied","Data":"95ee560fa84336303d3bfae0f0267624f3c6e99ed9bc8f278c96e8bb1b4861e3"} Mar 14 06:36:04 crc kubenswrapper[4817]: I0314 06:36:04.483986 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557836-z7dhx" Mar 14 06:36:04 crc kubenswrapper[4817]: I0314 06:36:04.484006 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ee560fa84336303d3bfae0f0267624f3c6e99ed9bc8f278c96e8bb1b4861e3" Mar 14 06:36:04 crc kubenswrapper[4817]: I0314 06:36:04.732210 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:36:04 crc kubenswrapper[4817]: E0314 06:36:04.732654 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:36:04 crc kubenswrapper[4817]: I0314 06:36:04.914656 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557830-ndr9n"] Mar 14 06:36:04 crc kubenswrapper[4817]: I0314 06:36:04.927077 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557830-ndr9n"] Mar 14 06:36:06 crc kubenswrapper[4817]: I0314 06:36:06.750596 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34948651-9926-42ca-bdd6-d227b7e7797c" path="/var/lib/kubelet/pods/34948651-9926-42ca-bdd6-d227b7e7797c/volumes" Mar 14 06:36:15 crc kubenswrapper[4817]: I0314 06:36:15.731792 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:36:15 crc kubenswrapper[4817]: E0314 06:36:15.732381 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:36:30 crc kubenswrapper[4817]: I0314 06:36:30.734380 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:36:30 crc kubenswrapper[4817]: E0314 06:36:30.734976 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:36:44 crc kubenswrapper[4817]: I0314 06:36:44.732320 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:36:44 crc kubenswrapper[4817]: E0314 06:36:44.733262 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:36:46 crc kubenswrapper[4817]: I0314 06:36:46.961350 4817 scope.go:117] "RemoveContainer" containerID="9e4d79cbd7e3b0a0a192eac67e4b8318817497a4cec6ed94c802f1f8804ba533" Mar 14 06:36:56 crc kubenswrapper[4817]: I0314 06:36:56.740385 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:36:56 crc kubenswrapper[4817]: 
E0314 06:36:56.742121 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:37:09 crc kubenswrapper[4817]: I0314 06:37:09.734407 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:37:09 crc kubenswrapper[4817]: E0314 06:37:09.735225 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:37:20 crc kubenswrapper[4817]: I0314 06:37:20.732029 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:37:20 crc kubenswrapper[4817]: E0314 06:37:20.732787 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:37:32 crc kubenswrapper[4817]: I0314 06:37:32.732964 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:37:32 crc 
kubenswrapper[4817]: E0314 06:37:32.734790 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:37:46 crc kubenswrapper[4817]: I0314 06:37:46.739134 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:37:46 crc kubenswrapper[4817]: E0314 06:37:46.740188 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:37:57 crc kubenswrapper[4817]: I0314 06:37:57.732808 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:37:57 crc kubenswrapper[4817]: E0314 06:37:57.733747 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.150374 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557838-c9nd7"] Mar 14 
06:38:00 crc kubenswrapper[4817]: E0314 06:38:00.152644 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79c516a-5ba1-4bfc-b405-eb94d26a94c3" containerName="oc" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.152766 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79c516a-5ba1-4bfc-b405-eb94d26a94c3" containerName="oc" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.153106 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79c516a-5ba1-4bfc-b405-eb94d26a94c3" containerName="oc" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.153925 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557838-c9nd7" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.157415 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.157694 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.158023 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.177743 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557838-c9nd7"] Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.275312 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5dpz\" (UniqueName: \"kubernetes.io/projected/3ab959af-c99f-4a90-80f3-2af0891ecdc7-kube-api-access-p5dpz\") pod \"auto-csr-approver-29557838-c9nd7\" (UID: \"3ab959af-c99f-4a90-80f3-2af0891ecdc7\") " pod="openshift-infra/auto-csr-approver-29557838-c9nd7" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.378476 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p5dpz\" (UniqueName: \"kubernetes.io/projected/3ab959af-c99f-4a90-80f3-2af0891ecdc7-kube-api-access-p5dpz\") pod \"auto-csr-approver-29557838-c9nd7\" (UID: \"3ab959af-c99f-4a90-80f3-2af0891ecdc7\") " pod="openshift-infra/auto-csr-approver-29557838-c9nd7" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.401733 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5dpz\" (UniqueName: \"kubernetes.io/projected/3ab959af-c99f-4a90-80f3-2af0891ecdc7-kube-api-access-p5dpz\") pod \"auto-csr-approver-29557838-c9nd7\" (UID: \"3ab959af-c99f-4a90-80f3-2af0891ecdc7\") " pod="openshift-infra/auto-csr-approver-29557838-c9nd7" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.478263 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557838-c9nd7" Mar 14 06:38:00 crc kubenswrapper[4817]: I0314 06:38:00.970790 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557838-c9nd7"] Mar 14 06:38:01 crc kubenswrapper[4817]: I0314 06:38:01.565045 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557838-c9nd7" event={"ID":"3ab959af-c99f-4a90-80f3-2af0891ecdc7","Type":"ContainerStarted","Data":"17e3b6966e28279e8703408d4ee9dcfcb4072b5976702e3458e5d9752d38a45f"} Mar 14 06:38:02 crc kubenswrapper[4817]: I0314 06:38:02.575503 4817 generic.go:334] "Generic (PLEG): container finished" podID="3ab959af-c99f-4a90-80f3-2af0891ecdc7" containerID="62ba521c7c58c0e3b274ae414e2e978d567ee757202b0f6b96a4956b96789080" exitCode=0 Mar 14 06:38:02 crc kubenswrapper[4817]: I0314 06:38:02.575598 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557838-c9nd7" 
event={"ID":"3ab959af-c99f-4a90-80f3-2af0891ecdc7","Type":"ContainerDied","Data":"62ba521c7c58c0e3b274ae414e2e978d567ee757202b0f6b96a4956b96789080"} Mar 14 06:38:04 crc kubenswrapper[4817]: I0314 06:38:04.047574 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557838-c9nd7" Mar 14 06:38:04 crc kubenswrapper[4817]: I0314 06:38:04.153267 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5dpz\" (UniqueName: \"kubernetes.io/projected/3ab959af-c99f-4a90-80f3-2af0891ecdc7-kube-api-access-p5dpz\") pod \"3ab959af-c99f-4a90-80f3-2af0891ecdc7\" (UID: \"3ab959af-c99f-4a90-80f3-2af0891ecdc7\") " Mar 14 06:38:04 crc kubenswrapper[4817]: I0314 06:38:04.162131 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab959af-c99f-4a90-80f3-2af0891ecdc7-kube-api-access-p5dpz" (OuterVolumeSpecName: "kube-api-access-p5dpz") pod "3ab959af-c99f-4a90-80f3-2af0891ecdc7" (UID: "3ab959af-c99f-4a90-80f3-2af0891ecdc7"). InnerVolumeSpecName "kube-api-access-p5dpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:38:04 crc kubenswrapper[4817]: I0314 06:38:04.256534 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5dpz\" (UniqueName: \"kubernetes.io/projected/3ab959af-c99f-4a90-80f3-2af0891ecdc7-kube-api-access-p5dpz\") on node \"crc\" DevicePath \"\"" Mar 14 06:38:04 crc kubenswrapper[4817]: I0314 06:38:04.596564 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557838-c9nd7" event={"ID":"3ab959af-c99f-4a90-80f3-2af0891ecdc7","Type":"ContainerDied","Data":"17e3b6966e28279e8703408d4ee9dcfcb4072b5976702e3458e5d9752d38a45f"} Mar 14 06:38:04 crc kubenswrapper[4817]: I0314 06:38:04.596604 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e3b6966e28279e8703408d4ee9dcfcb4072b5976702e3458e5d9752d38a45f" Mar 14 06:38:04 crc kubenswrapper[4817]: I0314 06:38:04.596642 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557838-c9nd7" Mar 14 06:38:05 crc kubenswrapper[4817]: I0314 06:38:05.133094 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557832-mn6l2"] Mar 14 06:38:05 crc kubenswrapper[4817]: I0314 06:38:05.142695 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557832-mn6l2"] Mar 14 06:38:06 crc kubenswrapper[4817]: I0314 06:38:06.745670 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543ebca0-20cb-43b0-ad71-75dc8128c7b6" path="/var/lib/kubelet/pods/543ebca0-20cb-43b0-ad71-75dc8128c7b6/volumes" Mar 14 06:38:11 crc kubenswrapper[4817]: I0314 06:38:11.732088 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:38:11 crc kubenswrapper[4817]: E0314 06:38:11.734009 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:38:22 crc kubenswrapper[4817]: I0314 06:38:22.731715 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:38:22 crc kubenswrapper[4817]: E0314 06:38:22.732628 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:38:34 crc kubenswrapper[4817]: I0314 06:38:34.732175 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:38:34 crc kubenswrapper[4817]: E0314 06:38:34.733106 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:38:47 crc kubenswrapper[4817]: I0314 06:38:47.086709 4817 scope.go:117] "RemoveContainer" containerID="fb2efde3d9fb4b66c517ed6d3dfb291f0c78915cfcbc2e2d3fa01af8302fe2fe" Mar 14 06:38:47 crc kubenswrapper[4817]: I0314 06:38:47.732446 4817 scope.go:117] "RemoveContainer" 
containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:38:47 crc kubenswrapper[4817]: E0314 06:38:47.733071 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:38:58 crc kubenswrapper[4817]: I0314 06:38:58.732640 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:38:58 crc kubenswrapper[4817]: E0314 06:38:58.733508 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:39:11 crc kubenswrapper[4817]: I0314 06:39:11.732331 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:39:11 crc kubenswrapper[4817]: E0314 06:39:11.733179 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:39:22 crc kubenswrapper[4817]: I0314 06:39:22.733006 4817 scope.go:117] 
"RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:39:22 crc kubenswrapper[4817]: E0314 06:39:22.733960 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:39:34 crc kubenswrapper[4817]: I0314 06:39:34.732448 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:39:34 crc kubenswrapper[4817]: E0314 06:39:34.733292 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.439969 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bwrjr"] Mar 14 06:39:42 crc kubenswrapper[4817]: E0314 06:39:42.440993 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab959af-c99f-4a90-80f3-2af0891ecdc7" containerName="oc" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.441010 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab959af-c99f-4a90-80f3-2af0891ecdc7" containerName="oc" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.441238 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab959af-c99f-4a90-80f3-2af0891ecdc7" containerName="oc" Mar 14 
06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.442717 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.457707 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwrjr"] Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.496247 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-utilities\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.496306 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sps8p\" (UniqueName: \"kubernetes.io/projected/9760788b-0072-4de5-a145-f2e7ace5ee0c-kube-api-access-sps8p\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.496492 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-catalog-content\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.598198 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-catalog-content\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc 
kubenswrapper[4817]: I0314 06:39:42.598319 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-utilities\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.598359 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sps8p\" (UniqueName: \"kubernetes.io/projected/9760788b-0072-4de5-a145-f2e7ace5ee0c-kube-api-access-sps8p\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.599357 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-catalog-content\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.599627 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-utilities\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.623819 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sps8p\" (UniqueName: \"kubernetes.io/projected/9760788b-0072-4de5-a145-f2e7ace5ee0c-kube-api-access-sps8p\") pod \"redhat-operators-bwrjr\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:42 crc kubenswrapper[4817]: I0314 06:39:42.785593 4817 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:43 crc kubenswrapper[4817]: I0314 06:39:43.261220 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwrjr"] Mar 14 06:39:43 crc kubenswrapper[4817]: I0314 06:39:43.494211 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwrjr" event={"ID":"9760788b-0072-4de5-a145-f2e7ace5ee0c","Type":"ContainerStarted","Data":"1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651"} Mar 14 06:39:43 crc kubenswrapper[4817]: I0314 06:39:43.494279 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwrjr" event={"ID":"9760788b-0072-4de5-a145-f2e7ace5ee0c","Type":"ContainerStarted","Data":"beb178adce72c982cf966adea0e72c26db3cf1c5aaf12213cfe8b574e16f9204"} Mar 14 06:39:44 crc kubenswrapper[4817]: I0314 06:39:44.506192 4817 generic.go:334] "Generic (PLEG): container finished" podID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerID="1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651" exitCode=0 Mar 14 06:39:44 crc kubenswrapper[4817]: I0314 06:39:44.506388 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwrjr" event={"ID":"9760788b-0072-4de5-a145-f2e7ace5ee0c","Type":"ContainerDied","Data":"1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651"} Mar 14 06:39:45 crc kubenswrapper[4817]: I0314 06:39:45.519978 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwrjr" event={"ID":"9760788b-0072-4de5-a145-f2e7ace5ee0c","Type":"ContainerStarted","Data":"aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24"} Mar 14 06:39:46 crc kubenswrapper[4817]: I0314 06:39:46.529288 4817 generic.go:334] "Generic (PLEG): container finished" podID="9760788b-0072-4de5-a145-f2e7ace5ee0c" 
containerID="aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24" exitCode=0 Mar 14 06:39:46 crc kubenswrapper[4817]: I0314 06:39:46.529330 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwrjr" event={"ID":"9760788b-0072-4de5-a145-f2e7ace5ee0c","Type":"ContainerDied","Data":"aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24"} Mar 14 06:39:46 crc kubenswrapper[4817]: I0314 06:39:46.756026 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:39:46 crc kubenswrapper[4817]: E0314 06:39:46.757084 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:39:47 crc kubenswrapper[4817]: I0314 06:39:47.541275 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwrjr" event={"ID":"9760788b-0072-4de5-a145-f2e7ace5ee0c","Type":"ContainerStarted","Data":"1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62"} Mar 14 06:39:47 crc kubenswrapper[4817]: I0314 06:39:47.570562 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bwrjr" podStartSLOduration=3.120015042 podStartE2EDuration="5.570536387s" podCreationTimestamp="2026-03-14 06:39:42 +0000 UTC" firstStartedPulling="2026-03-14 06:39:44.510333987 +0000 UTC m=+4038.548594753" lastFinishedPulling="2026-03-14 06:39:46.960855352 +0000 UTC m=+4040.999116098" observedRunningTime="2026-03-14 06:39:47.558746851 +0000 UTC m=+4041.597007627" watchObservedRunningTime="2026-03-14 
06:39:47.570536387 +0000 UTC m=+4041.608797163" Mar 14 06:39:52 crc kubenswrapper[4817]: I0314 06:39:52.785971 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:52 crc kubenswrapper[4817]: I0314 06:39:52.786594 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:39:53 crc kubenswrapper[4817]: I0314 06:39:53.843832 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bwrjr" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="registry-server" probeResult="failure" output=< Mar 14 06:39:53 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 06:39:53 crc kubenswrapper[4817]: > Mar 14 06:39:59 crc kubenswrapper[4817]: I0314 06:39:59.732350 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f" Mar 14 06:39:59 crc kubenswrapper[4817]: E0314 06:39:59.734460 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.177372 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557840-mvz42"] Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.179128 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557840-mvz42" Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.182562 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.183522 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.183630 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.188546 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557840-mvz42"] Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.365786 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2vpp\" (UniqueName: \"kubernetes.io/projected/56529938-5eac-44ab-b44b-9b0cd6581384-kube-api-access-h2vpp\") pod \"auto-csr-approver-29557840-mvz42\" (UID: \"56529938-5eac-44ab-b44b-9b0cd6581384\") " pod="openshift-infra/auto-csr-approver-29557840-mvz42" Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.471602 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2vpp\" (UniqueName: \"kubernetes.io/projected/56529938-5eac-44ab-b44b-9b0cd6581384-kube-api-access-h2vpp\") pod \"auto-csr-approver-29557840-mvz42\" (UID: \"56529938-5eac-44ab-b44b-9b0cd6581384\") " pod="openshift-infra/auto-csr-approver-29557840-mvz42" Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.491664 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2vpp\" (UniqueName: \"kubernetes.io/projected/56529938-5eac-44ab-b44b-9b0cd6581384-kube-api-access-h2vpp\") pod \"auto-csr-approver-29557840-mvz42\" (UID: \"56529938-5eac-44ab-b44b-9b0cd6581384\") " 
pod="openshift-infra/auto-csr-approver-29557840-mvz42" Mar 14 06:40:00 crc kubenswrapper[4817]: I0314 06:40:00.516774 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557840-mvz42" Mar 14 06:40:01 crc kubenswrapper[4817]: I0314 06:40:01.015762 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557840-mvz42"] Mar 14 06:40:01 crc kubenswrapper[4817]: I0314 06:40:01.714690 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557840-mvz42" event={"ID":"56529938-5eac-44ab-b44b-9b0cd6581384","Type":"ContainerStarted","Data":"ed9bc6ca8cf867184388507e1603e9278d8c5a739bc6f0046b9854e0dca2d97f"} Mar 14 06:40:02 crc kubenswrapper[4817]: I0314 06:40:02.842732 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:40:02 crc kubenswrapper[4817]: I0314 06:40:02.894140 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:40:03 crc kubenswrapper[4817]: I0314 06:40:03.092473 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bwrjr"] Mar 14 06:40:03 crc kubenswrapper[4817]: I0314 06:40:03.735488 4817 generic.go:334] "Generic (PLEG): container finished" podID="56529938-5eac-44ab-b44b-9b0cd6581384" containerID="d2cb333d0f88db48d4c4c1d90b1d8c2869626918f9c25190fc3128e4e3d239b0" exitCode=0 Mar 14 06:40:03 crc kubenswrapper[4817]: I0314 06:40:03.735607 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557840-mvz42" event={"ID":"56529938-5eac-44ab-b44b-9b0cd6581384","Type":"ContainerDied","Data":"d2cb333d0f88db48d4c4c1d90b1d8c2869626918f9c25190fc3128e4e3d239b0"} Mar 14 06:40:04 crc kubenswrapper[4817]: I0314 06:40:04.755646 4817 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-operators-bwrjr" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="registry-server" containerID="cri-o://1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62" gracePeriod=2 Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.200261 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557840-mvz42" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.340796 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.378594 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2vpp\" (UniqueName: \"kubernetes.io/projected/56529938-5eac-44ab-b44b-9b0cd6581384-kube-api-access-h2vpp\") pod \"56529938-5eac-44ab-b44b-9b0cd6581384\" (UID: \"56529938-5eac-44ab-b44b-9b0cd6581384\") " Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.384690 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56529938-5eac-44ab-b44b-9b0cd6581384-kube-api-access-h2vpp" (OuterVolumeSpecName: "kube-api-access-h2vpp") pod "56529938-5eac-44ab-b44b-9b0cd6581384" (UID: "56529938-5eac-44ab-b44b-9b0cd6581384"). InnerVolumeSpecName "kube-api-access-h2vpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.480516 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-catalog-content\") pod \"9760788b-0072-4de5-a145-f2e7ace5ee0c\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.480709 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sps8p\" (UniqueName: \"kubernetes.io/projected/9760788b-0072-4de5-a145-f2e7ace5ee0c-kube-api-access-sps8p\") pod \"9760788b-0072-4de5-a145-f2e7ace5ee0c\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.480736 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-utilities\") pod \"9760788b-0072-4de5-a145-f2e7ace5ee0c\" (UID: \"9760788b-0072-4de5-a145-f2e7ace5ee0c\") " Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.481262 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2vpp\" (UniqueName: \"kubernetes.io/projected/56529938-5eac-44ab-b44b-9b0cd6581384-kube-api-access-h2vpp\") on node \"crc\" DevicePath \"\"" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.482492 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-utilities" (OuterVolumeSpecName: "utilities") pod "9760788b-0072-4de5-a145-f2e7ace5ee0c" (UID: "9760788b-0072-4de5-a145-f2e7ace5ee0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.487000 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9760788b-0072-4de5-a145-f2e7ace5ee0c-kube-api-access-sps8p" (OuterVolumeSpecName: "kube-api-access-sps8p") pod "9760788b-0072-4de5-a145-f2e7ace5ee0c" (UID: "9760788b-0072-4de5-a145-f2e7ace5ee0c"). InnerVolumeSpecName "kube-api-access-sps8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.583015 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sps8p\" (UniqueName: \"kubernetes.io/projected/9760788b-0072-4de5-a145-f2e7ace5ee0c-kube-api-access-sps8p\") on node \"crc\" DevicePath \"\"" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.583051 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.647322 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9760788b-0072-4de5-a145-f2e7ace5ee0c" (UID: "9760788b-0072-4de5-a145-f2e7ace5ee0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.685954 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9760788b-0072-4de5-a145-f2e7ace5ee0c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.770192 4817 generic.go:334] "Generic (PLEG): container finished" podID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerID="1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62" exitCode=0 Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.770288 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwrjr" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.770325 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwrjr" event={"ID":"9760788b-0072-4de5-a145-f2e7ace5ee0c","Type":"ContainerDied","Data":"1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62"} Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.770428 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwrjr" event={"ID":"9760788b-0072-4de5-a145-f2e7ace5ee0c","Type":"ContainerDied","Data":"beb178adce72c982cf966adea0e72c26db3cf1c5aaf12213cfe8b574e16f9204"} Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.770463 4817 scope.go:117] "RemoveContainer" containerID="1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62" Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.773081 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557840-mvz42" event={"ID":"56529938-5eac-44ab-b44b-9b0cd6581384","Type":"ContainerDied","Data":"ed9bc6ca8cf867184388507e1603e9278d8c5a739bc6f0046b9854e0dca2d97f"} Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.773140 4817 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed9bc6ca8cf867184388507e1603e9278d8c5a739bc6f0046b9854e0dca2d97f"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.773197 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557840-mvz42"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.818261 4817 scope.go:117] "RemoveContainer" containerID="aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.824042 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bwrjr"]
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.832833 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bwrjr"]
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.860415 4817 scope.go:117] "RemoveContainer" containerID="1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.968023 4817 scope.go:117] "RemoveContainer" containerID="1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62"
Mar 14 06:40:05 crc kubenswrapper[4817]: E0314 06:40:05.979711 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62\": container with ID starting with 1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62 not found: ID does not exist" containerID="1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.979785 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62"} err="failed to get container status \"1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62\": rpc error: code = NotFound desc = could not find container \"1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62\": container with ID starting with 1bc10534e84c716efe792ce9a4c333308633506c83f97cbf4bbea4ebcac49a62 not found: ID does not exist"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.979829 4817 scope.go:117] "RemoveContainer" containerID="aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24"
Mar 14 06:40:05 crc kubenswrapper[4817]: E0314 06:40:05.980227 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24\": container with ID starting with aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24 not found: ID does not exist" containerID="aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.980266 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24"} err="failed to get container status \"aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24\": rpc error: code = NotFound desc = could not find container \"aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24\": container with ID starting with aead7e40a30b7f3ef91d245e1dbbe3ed084a1105695bf14d961c891146ec6a24 not found: ID does not exist"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.980292 4817 scope.go:117] "RemoveContainer" containerID="1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651"
Mar 14 06:40:05 crc kubenswrapper[4817]: E0314 06:40:05.980726 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651\": container with ID starting with 1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651 not found: ID does not exist" containerID="1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651"
Mar 14 06:40:05 crc kubenswrapper[4817]: I0314 06:40:05.980884 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651"} err="failed to get container status \"1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651\": rpc error: code = NotFound desc = could not find container \"1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651\": container with ID starting with 1a6e336f6e969b692a0070b4514fd8fac5d94749de8f25ab6fce44b50fb7b651 not found: ID does not exist"
Mar 14 06:40:06 crc kubenswrapper[4817]: I0314 06:40:06.324052 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557834-sw67r"]
Mar 14 06:40:06 crc kubenswrapper[4817]: I0314 06:40:06.333544 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557834-sw67r"]
Mar 14 06:40:06 crc kubenswrapper[4817]: I0314 06:40:06.753697 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8140bc3d-41a5-499c-b797-0e6d19addd0b" path="/var/lib/kubelet/pods/8140bc3d-41a5-499c-b797-0e6d19addd0b/volumes"
Mar 14 06:40:06 crc kubenswrapper[4817]: I0314 06:40:06.755036 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" path="/var/lib/kubelet/pods/9760788b-0072-4de5-a145-f2e7ace5ee0c/volumes"
Mar 14 06:40:10 crc kubenswrapper[4817]: I0314 06:40:10.733748 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f"
Mar 14 06:40:11 crc kubenswrapper[4817]: I0314 06:40:11.849579 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"70880caa4a057b35883c3ab3eebd0dbd099513e1fdd2a5a5f0cfa99e7431ada4"}
Mar 14 06:40:47 crc kubenswrapper[4817]: I0314 06:40:47.184675 4817 scope.go:117] "RemoveContainer" containerID="9fdd7f87f1ad3f5e2b3a4febe518b45e91aef26cb3a72cd3ff9034dc1fa23215"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.837042 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4j9lw"]
Mar 14 06:41:17 crc kubenswrapper[4817]: E0314 06:41:17.853599 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="extract-content"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.853632 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="extract-content"
Mar 14 06:41:17 crc kubenswrapper[4817]: E0314 06:41:17.853666 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="registry-server"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.853676 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="registry-server"
Mar 14 06:41:17 crc kubenswrapper[4817]: E0314 06:41:17.853698 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="extract-utilities"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.853707 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="extract-utilities"
Mar 14 06:41:17 crc kubenswrapper[4817]: E0314 06:41:17.853729 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56529938-5eac-44ab-b44b-9b0cd6581384" containerName="oc"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.853737 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="56529938-5eac-44ab-b44b-9b0cd6581384" containerName="oc"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.853984 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="56529938-5eac-44ab-b44b-9b0cd6581384" containerName="oc"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.854007 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="9760788b-0072-4de5-a145-f2e7ace5ee0c" containerName="registry-server"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.856422 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.861366 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4j9lw"]
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.981129 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-utilities\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.981621 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffgf\" (UniqueName: \"kubernetes.io/projected/ee8cae23-674a-45b3-974b-5bda3e400ba0-kube-api-access-pffgf\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:17 crc kubenswrapper[4817]: I0314 06:41:17.981664 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-catalog-content\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:18 crc kubenswrapper[4817]: I0314 06:41:18.083846 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pffgf\" (UniqueName: \"kubernetes.io/projected/ee8cae23-674a-45b3-974b-5bda3e400ba0-kube-api-access-pffgf\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:18 crc kubenswrapper[4817]: I0314 06:41:18.083929 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-catalog-content\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:18 crc kubenswrapper[4817]: I0314 06:41:18.084045 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-utilities\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:18 crc kubenswrapper[4817]: I0314 06:41:18.084698 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-utilities\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:18 crc kubenswrapper[4817]: I0314 06:41:18.084847 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-catalog-content\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:18 crc kubenswrapper[4817]: I0314 06:41:18.105448 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pffgf\" (UniqueName: \"kubernetes.io/projected/ee8cae23-674a-45b3-974b-5bda3e400ba0-kube-api-access-pffgf\") pod \"redhat-marketplace-4j9lw\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") " pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:18 crc kubenswrapper[4817]: I0314 06:41:18.191459 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:18 crc kubenswrapper[4817]: I0314 06:41:18.682484 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4j9lw"]
Mar 14 06:41:19 crc kubenswrapper[4817]: I0314 06:41:19.528947 4817 generic.go:334] "Generic (PLEG): container finished" podID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerID="2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba" exitCode=0
Mar 14 06:41:19 crc kubenswrapper[4817]: I0314 06:41:19.530104 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4j9lw" event={"ID":"ee8cae23-674a-45b3-974b-5bda3e400ba0","Type":"ContainerDied","Data":"2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba"}
Mar 14 06:41:19 crc kubenswrapper[4817]: I0314 06:41:19.530198 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4j9lw" event={"ID":"ee8cae23-674a-45b3-974b-5bda3e400ba0","Type":"ContainerStarted","Data":"6ee6a0f9ecdb28d6a4b5adbe4f264ece543815ae83debc9cb1747221aa1e777c"}
Mar 14 06:41:19 crc kubenswrapper[4817]: I0314 06:41:19.534069 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 06:41:20 crc kubenswrapper[4817]: I0314 06:41:20.544698 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4j9lw" event={"ID":"ee8cae23-674a-45b3-974b-5bda3e400ba0","Type":"ContainerStarted","Data":"a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4"}
Mar 14 06:41:21 crc kubenswrapper[4817]: I0314 06:41:21.558318 4817 generic.go:334] "Generic (PLEG): container finished" podID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerID="a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4" exitCode=0
Mar 14 06:41:21 crc kubenswrapper[4817]: I0314 06:41:21.558673 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4j9lw" event={"ID":"ee8cae23-674a-45b3-974b-5bda3e400ba0","Type":"ContainerDied","Data":"a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4"}
Mar 14 06:41:22 crc kubenswrapper[4817]: I0314 06:41:22.570069 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4j9lw" event={"ID":"ee8cae23-674a-45b3-974b-5bda3e400ba0","Type":"ContainerStarted","Data":"e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3"}
Mar 14 06:41:28 crc kubenswrapper[4817]: I0314 06:41:28.192728 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:28 crc kubenswrapper[4817]: I0314 06:41:28.193665 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:28 crc kubenswrapper[4817]: I0314 06:41:28.263441 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:28 crc kubenswrapper[4817]: I0314 06:41:28.286978 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4j9lw" podStartSLOduration=8.882684861 podStartE2EDuration="11.286956699s" podCreationTimestamp="2026-03-14 06:41:17 +0000 UTC" firstStartedPulling="2026-03-14 06:41:19.533852214 +0000 UTC m=+4133.572112960" lastFinishedPulling="2026-03-14 06:41:21.938124052 +0000 UTC m=+4135.976384798" observedRunningTime="2026-03-14 06:41:22.594693172 +0000 UTC m=+4136.632953928" watchObservedRunningTime="2026-03-14 06:41:28.286956699 +0000 UTC m=+4142.325217445"
Mar 14 06:41:28 crc kubenswrapper[4817]: I0314 06:41:28.694968 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:28 crc kubenswrapper[4817]: I0314 06:41:28.757953 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4j9lw"]
Mar 14 06:41:30 crc kubenswrapper[4817]: I0314 06:41:30.641496 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4j9lw" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerName="registry-server" containerID="cri-o://e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3" gracePeriod=2
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.135198 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.305020 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-utilities\") pod \"ee8cae23-674a-45b3-974b-5bda3e400ba0\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") "
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.305499 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pffgf\" (UniqueName: \"kubernetes.io/projected/ee8cae23-674a-45b3-974b-5bda3e400ba0-kube-api-access-pffgf\") pod \"ee8cae23-674a-45b3-974b-5bda3e400ba0\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") "
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.305582 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-catalog-content\") pod \"ee8cae23-674a-45b3-974b-5bda3e400ba0\" (UID: \"ee8cae23-674a-45b3-974b-5bda3e400ba0\") "
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.305810 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-utilities" (OuterVolumeSpecName: "utilities") pod "ee8cae23-674a-45b3-974b-5bda3e400ba0" (UID: "ee8cae23-674a-45b3-974b-5bda3e400ba0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.310728 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.316636 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8cae23-674a-45b3-974b-5bda3e400ba0-kube-api-access-pffgf" (OuterVolumeSpecName: "kube-api-access-pffgf") pod "ee8cae23-674a-45b3-974b-5bda3e400ba0" (UID: "ee8cae23-674a-45b3-974b-5bda3e400ba0"). InnerVolumeSpecName "kube-api-access-pffgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.333362 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee8cae23-674a-45b3-974b-5bda3e400ba0" (UID: "ee8cae23-674a-45b3-974b-5bda3e400ba0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.413083 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pffgf\" (UniqueName: \"kubernetes.io/projected/ee8cae23-674a-45b3-974b-5bda3e400ba0-kube-api-access-pffgf\") on node \"crc\" DevicePath \"\""
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.413117 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8cae23-674a-45b3-974b-5bda3e400ba0-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.658180 4817 generic.go:334] "Generic (PLEG): container finished" podID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerID="e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3" exitCode=0
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.658246 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4j9lw" event={"ID":"ee8cae23-674a-45b3-974b-5bda3e400ba0","Type":"ContainerDied","Data":"e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3"}
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.658279 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4j9lw"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.658315 4817 scope.go:117] "RemoveContainer" containerID="e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.658297 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4j9lw" event={"ID":"ee8cae23-674a-45b3-974b-5bda3e400ba0","Type":"ContainerDied","Data":"6ee6a0f9ecdb28d6a4b5adbe4f264ece543815ae83debc9cb1747221aa1e777c"}
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.694054 4817 scope.go:117] "RemoveContainer" containerID="a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.733781 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4j9lw"]
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.741759 4817 scope.go:117] "RemoveContainer" containerID="2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.744294 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4j9lw"]
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.790685 4817 scope.go:117] "RemoveContainer" containerID="e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3"
Mar 14 06:41:31 crc kubenswrapper[4817]: E0314 06:41:31.791471 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3\": container with ID starting with e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3 not found: ID does not exist" containerID="e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.791519 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3"} err="failed to get container status \"e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3\": rpc error: code = NotFound desc = could not find container \"e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3\": container with ID starting with e15fa7cf22d0dec132a0788080856f63a8f165c046d5e6f94974b88b8bd9e9f3 not found: ID does not exist"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.791549 4817 scope.go:117] "RemoveContainer" containerID="a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4"
Mar 14 06:41:31 crc kubenswrapper[4817]: E0314 06:41:31.792340 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4\": container with ID starting with a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4 not found: ID does not exist" containerID="a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.792378 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4"} err="failed to get container status \"a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4\": rpc error: code = NotFound desc = could not find container \"a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4\": container with ID starting with a7a78773d15b67c9cc0e03150847e2627be8d3af68fc58befffda7ea6d6067e4 not found: ID does not exist"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.792403 4817 scope.go:117] "RemoveContainer" containerID="2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba"
Mar 14 06:41:31 crc kubenswrapper[4817]: E0314 06:41:31.792732 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba\": container with ID starting with 2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba not found: ID does not exist" containerID="2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba"
Mar 14 06:41:31 crc kubenswrapper[4817]: I0314 06:41:31.792757 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba"} err="failed to get container status \"2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba\": rpc error: code = NotFound desc = could not find container \"2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba\": container with ID starting with 2e8ad5dc60082dd291176879f7652994585dedf5f5936a43664076e108cae7ba not found: ID does not exist"
Mar 14 06:41:32 crc kubenswrapper[4817]: I0314 06:41:32.743819 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" path="/var/lib/kubelet/pods/ee8cae23-674a-45b3-974b-5bda3e400ba0/volumes"
Mar 14 06:41:33 crc kubenswrapper[4817]: I0314 06:41:33.912930 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j692m"]
Mar 14 06:41:33 crc kubenswrapper[4817]: E0314 06:41:33.913359 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerName="extract-utilities"
Mar 14 06:41:33 crc kubenswrapper[4817]: I0314 06:41:33.913371 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerName="extract-utilities"
Mar 14 06:41:33 crc kubenswrapper[4817]: E0314 06:41:33.913393 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerName="registry-server"
Mar 14 06:41:33 crc kubenswrapper[4817]: I0314 06:41:33.913399 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerName="registry-server"
Mar 14 06:41:33 crc kubenswrapper[4817]: E0314 06:41:33.913418 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerName="extract-content"
Mar 14 06:41:33 crc kubenswrapper[4817]: I0314 06:41:33.913427 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerName="extract-content"
Mar 14 06:41:33 crc kubenswrapper[4817]: I0314 06:41:33.913660 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8cae23-674a-45b3-974b-5bda3e400ba0" containerName="registry-server"
Mar 14 06:41:33 crc kubenswrapper[4817]: I0314 06:41:33.915560 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:33 crc kubenswrapper[4817]: I0314 06:41:33.924518 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j692m"]
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.071222 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2x2d\" (UniqueName: \"kubernetes.io/projected/3af12190-b1ec-4230-89bf-0bd27fec5abc-kube-api-access-l2x2d\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.071295 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-utilities\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.071412 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-catalog-content\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.173588 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-catalog-content\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.173684 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2x2d\" (UniqueName: \"kubernetes.io/projected/3af12190-b1ec-4230-89bf-0bd27fec5abc-kube-api-access-l2x2d\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.173727 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-utilities\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.174185 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-utilities\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.174473 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-catalog-content\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.197051 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2x2d\" (UniqueName: \"kubernetes.io/projected/3af12190-b1ec-4230-89bf-0bd27fec5abc-kube-api-access-l2x2d\") pod \"community-operators-j692m\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") " pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.243225 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:34 crc kubenswrapper[4817]: I0314 06:41:34.790742 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j692m"]
Mar 14 06:41:34 crc kubenswrapper[4817]: W0314 06:41:34.794872 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3af12190_b1ec_4230_89bf_0bd27fec5abc.slice/crio-813e642ee6cdd470bd635087b7208ffc6b0f5c5615df39a41874b5f3b7f42926 WatchSource:0}: Error finding container 813e642ee6cdd470bd635087b7208ffc6b0f5c5615df39a41874b5f3b7f42926: Status 404 returned error can't find the container with id 813e642ee6cdd470bd635087b7208ffc6b0f5c5615df39a41874b5f3b7f42926
Mar 14 06:41:35 crc kubenswrapper[4817]: I0314 06:41:35.705210 4817 generic.go:334] "Generic (PLEG): container finished" podID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerID="bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832" exitCode=0
Mar 14 06:41:35 crc kubenswrapper[4817]: I0314 06:41:35.705281 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j692m" event={"ID":"3af12190-b1ec-4230-89bf-0bd27fec5abc","Type":"ContainerDied","Data":"bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832"}
Mar 14 06:41:35 crc kubenswrapper[4817]: I0314 06:41:35.708139 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j692m" event={"ID":"3af12190-b1ec-4230-89bf-0bd27fec5abc","Type":"ContainerStarted","Data":"813e642ee6cdd470bd635087b7208ffc6b0f5c5615df39a41874b5f3b7f42926"}
Mar 14 06:41:36 crc kubenswrapper[4817]: I0314 06:41:36.768838 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j692m" event={"ID":"3af12190-b1ec-4230-89bf-0bd27fec5abc","Type":"ContainerStarted","Data":"a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3"}
Mar 14 06:41:38 crc kubenswrapper[4817]: I0314 06:41:38.759819 4817 generic.go:334] "Generic (PLEG): container finished" podID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerID="a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3" exitCode=0
Mar 14 06:41:38 crc kubenswrapper[4817]: I0314 06:41:38.759960 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j692m" event={"ID":"3af12190-b1ec-4230-89bf-0bd27fec5abc","Type":"ContainerDied","Data":"a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3"}
Mar 14 06:41:39 crc kubenswrapper[4817]: I0314 06:41:39.770721 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j692m" event={"ID":"3af12190-b1ec-4230-89bf-0bd27fec5abc","Type":"ContainerStarted","Data":"b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f"}
Mar 14 06:41:39 crc kubenswrapper[4817]: I0314 06:41:39.799270 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j692m" podStartSLOduration=3.315826536 podStartE2EDuration="6.799252231s" podCreationTimestamp="2026-03-14 06:41:33 +0000 UTC" firstStartedPulling="2026-03-14 06:41:35.707886671 +0000 UTC m=+4149.746147417" lastFinishedPulling="2026-03-14 06:41:39.191312336 +0000 UTC m=+4153.229573112" observedRunningTime="2026-03-14 06:41:39.791066358 +0000 UTC m=+4153.829327104" watchObservedRunningTime="2026-03-14 06:41:39.799252231 +0000 UTC m=+4153.837512977"
Mar 14 06:41:44 crc kubenswrapper[4817]: I0314 06:41:44.243719 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:44 crc kubenswrapper[4817]: I0314 06:41:44.246134 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:44 crc kubenswrapper[4817]: I0314 06:41:44.323219 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:44 crc kubenswrapper[4817]: I0314 06:41:44.890443 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:44 crc kubenswrapper[4817]: I0314 06:41:44.942025 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j692m"]
Mar 14 06:41:46 crc kubenswrapper[4817]: I0314 06:41:46.838036 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j692m" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerName="registry-server" containerID="cri-o://b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f" gracePeriod=2
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.432110 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j692m"
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.495665 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-utilities\") pod \"3af12190-b1ec-4230-89bf-0bd27fec5abc\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") "
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.495759 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2x2d\" (UniqueName: \"kubernetes.io/projected/3af12190-b1ec-4230-89bf-0bd27fec5abc-kube-api-access-l2x2d\") pod \"3af12190-b1ec-4230-89bf-0bd27fec5abc\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") "
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.495799 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-catalog-content\") pod \"3af12190-b1ec-4230-89bf-0bd27fec5abc\" (UID: \"3af12190-b1ec-4230-89bf-0bd27fec5abc\") "
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.496838 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-utilities" (OuterVolumeSpecName: "utilities") pod "3af12190-b1ec-4230-89bf-0bd27fec5abc" (UID: "3af12190-b1ec-4230-89bf-0bd27fec5abc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.497097 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.502802 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af12190-b1ec-4230-89bf-0bd27fec5abc-kube-api-access-l2x2d" (OuterVolumeSpecName: "kube-api-access-l2x2d") pod "3af12190-b1ec-4230-89bf-0bd27fec5abc" (UID: "3af12190-b1ec-4230-89bf-0bd27fec5abc"). InnerVolumeSpecName "kube-api-access-l2x2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.553221 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3af12190-b1ec-4230-89bf-0bd27fec5abc" (UID: "3af12190-b1ec-4230-89bf-0bd27fec5abc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.597991 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af12190-b1ec-4230-89bf-0bd27fec5abc-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.598040 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2x2d\" (UniqueName: \"kubernetes.io/projected/3af12190-b1ec-4230-89bf-0bd27fec5abc-kube-api-access-l2x2d\") on node \"crc\" DevicePath \"\""
Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.848981 4817 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-j692m" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.849000 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j692m" event={"ID":"3af12190-b1ec-4230-89bf-0bd27fec5abc","Type":"ContainerDied","Data":"b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f"} Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.849565 4817 scope.go:117] "RemoveContainer" containerID="b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.848969 4817 generic.go:334] "Generic (PLEG): container finished" podID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerID="b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f" exitCode=0 Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.849773 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j692m" event={"ID":"3af12190-b1ec-4230-89bf-0bd27fec5abc","Type":"ContainerDied","Data":"813e642ee6cdd470bd635087b7208ffc6b0f5c5615df39a41874b5f3b7f42926"} Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.893690 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j692m"] Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.894542 4817 scope.go:117] "RemoveContainer" containerID="a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.902510 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j692m"] Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.914820 4817 scope.go:117] "RemoveContainer" containerID="bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.967190 4817 scope.go:117] "RemoveContainer" 
containerID="b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f" Mar 14 06:41:47 crc kubenswrapper[4817]: E0314 06:41:47.967524 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f\": container with ID starting with b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f not found: ID does not exist" containerID="b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.967558 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f"} err="failed to get container status \"b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f\": rpc error: code = NotFound desc = could not find container \"b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f\": container with ID starting with b1c6864ef827e8d019f60d27843e9571f5c6f0988814e05aac5128a11cc31b7f not found: ID does not exist" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.967580 4817 scope.go:117] "RemoveContainer" containerID="a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3" Mar 14 06:41:47 crc kubenswrapper[4817]: E0314 06:41:47.967841 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3\": container with ID starting with a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3 not found: ID does not exist" containerID="a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.967870 4817 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3"} err="failed to get container status \"a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3\": rpc error: code = NotFound desc = could not find container \"a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3\": container with ID starting with a8bfe2b2173b38e0edc50f994afde4b4e6ff9d2da9b49e91101c07dcff183ee3 not found: ID does not exist" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.967903 4817 scope.go:117] "RemoveContainer" containerID="bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832" Mar 14 06:41:47 crc kubenswrapper[4817]: E0314 06:41:47.968293 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832\": container with ID starting with bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832 not found: ID does not exist" containerID="bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832" Mar 14 06:41:47 crc kubenswrapper[4817]: I0314 06:41:47.968319 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832"} err="failed to get container status \"bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832\": rpc error: code = NotFound desc = could not find container \"bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832\": container with ID starting with bddc9b1382f90944b55e02c45b5d67d7182a9522e2302ec3b16bd8fc878ac832 not found: ID does not exist" Mar 14 06:41:48 crc kubenswrapper[4817]: I0314 06:41:48.743511 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" path="/var/lib/kubelet/pods/3af12190-b1ec-4230-89bf-0bd27fec5abc/volumes" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 
06:42:00.148962 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557842-glgk9"] Mar 14 06:42:00 crc kubenswrapper[4817]: E0314 06:42:00.150052 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerName="extract-content" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.150069 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerName="extract-content" Mar 14 06:42:00 crc kubenswrapper[4817]: E0314 06:42:00.150102 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerName="extract-utilities" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.150111 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerName="extract-utilities" Mar 14 06:42:00 crc kubenswrapper[4817]: E0314 06:42:00.150131 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerName="registry-server" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.150140 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerName="registry-server" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.150373 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af12190-b1ec-4230-89bf-0bd27fec5abc" containerName="registry-server" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.151235 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557842-glgk9" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.154332 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.154651 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.155003 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.162386 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557842-glgk9"] Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.268302 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfpcq\" (UniqueName: \"kubernetes.io/projected/7e3db2c6-b136-4a92-bf84-ec29d8121c48-kube-api-access-bfpcq\") pod \"auto-csr-approver-29557842-glgk9\" (UID: \"7e3db2c6-b136-4a92-bf84-ec29d8121c48\") " pod="openshift-infra/auto-csr-approver-29557842-glgk9" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.370855 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfpcq\" (UniqueName: \"kubernetes.io/projected/7e3db2c6-b136-4a92-bf84-ec29d8121c48-kube-api-access-bfpcq\") pod \"auto-csr-approver-29557842-glgk9\" (UID: \"7e3db2c6-b136-4a92-bf84-ec29d8121c48\") " pod="openshift-infra/auto-csr-approver-29557842-glgk9" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.391780 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfpcq\" (UniqueName: \"kubernetes.io/projected/7e3db2c6-b136-4a92-bf84-ec29d8121c48-kube-api-access-bfpcq\") pod \"auto-csr-approver-29557842-glgk9\" (UID: \"7e3db2c6-b136-4a92-bf84-ec29d8121c48\") " 
pod="openshift-infra/auto-csr-approver-29557842-glgk9" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.491702 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557842-glgk9" Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.964257 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557842-glgk9"] Mar 14 06:42:00 crc kubenswrapper[4817]: I0314 06:42:00.988407 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557842-glgk9" event={"ID":"7e3db2c6-b136-4a92-bf84-ec29d8121c48","Type":"ContainerStarted","Data":"c8ef5ac799a08662e8bf8a7809e06c7b46694443a2a9f0a78edf3f2ccd2d3553"} Mar 14 06:42:03 crc kubenswrapper[4817]: I0314 06:42:03.020295 4817 generic.go:334] "Generic (PLEG): container finished" podID="7e3db2c6-b136-4a92-bf84-ec29d8121c48" containerID="472393ed2bac0eb961abdf7f39efd4e01b4c6af9ece8d9c7d036b9c4d4b1bc98" exitCode=0 Mar 14 06:42:03 crc kubenswrapper[4817]: I0314 06:42:03.020437 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557842-glgk9" event={"ID":"7e3db2c6-b136-4a92-bf84-ec29d8121c48","Type":"ContainerDied","Data":"472393ed2bac0eb961abdf7f39efd4e01b4c6af9ece8d9c7d036b9c4d4b1bc98"} Mar 14 06:42:04 crc kubenswrapper[4817]: I0314 06:42:04.418525 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557842-glgk9" Mar 14 06:42:04 crc kubenswrapper[4817]: I0314 06:42:04.573983 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfpcq\" (UniqueName: \"kubernetes.io/projected/7e3db2c6-b136-4a92-bf84-ec29d8121c48-kube-api-access-bfpcq\") pod \"7e3db2c6-b136-4a92-bf84-ec29d8121c48\" (UID: \"7e3db2c6-b136-4a92-bf84-ec29d8121c48\") " Mar 14 06:42:04 crc kubenswrapper[4817]: I0314 06:42:04.589031 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3db2c6-b136-4a92-bf84-ec29d8121c48-kube-api-access-bfpcq" (OuterVolumeSpecName: "kube-api-access-bfpcq") pod "7e3db2c6-b136-4a92-bf84-ec29d8121c48" (UID: "7e3db2c6-b136-4a92-bf84-ec29d8121c48"). InnerVolumeSpecName "kube-api-access-bfpcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:42:04 crc kubenswrapper[4817]: I0314 06:42:04.676558 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfpcq\" (UniqueName: \"kubernetes.io/projected/7e3db2c6-b136-4a92-bf84-ec29d8121c48-kube-api-access-bfpcq\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:05 crc kubenswrapper[4817]: I0314 06:42:05.046310 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557842-glgk9" event={"ID":"7e3db2c6-b136-4a92-bf84-ec29d8121c48","Type":"ContainerDied","Data":"c8ef5ac799a08662e8bf8a7809e06c7b46694443a2a9f0a78edf3f2ccd2d3553"} Mar 14 06:42:05 crc kubenswrapper[4817]: I0314 06:42:05.046360 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ef5ac799a08662e8bf8a7809e06c7b46694443a2a9f0a78edf3f2ccd2d3553" Mar 14 06:42:05 crc kubenswrapper[4817]: I0314 06:42:05.046363 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557842-glgk9" Mar 14 06:42:05 crc kubenswrapper[4817]: I0314 06:42:05.500158 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557836-z7dhx"] Mar 14 06:42:05 crc kubenswrapper[4817]: I0314 06:42:05.511764 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557836-z7dhx"] Mar 14 06:42:06 crc kubenswrapper[4817]: I0314 06:42:06.744056 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79c516a-5ba1-4bfc-b405-eb94d26a94c3" path="/var/lib/kubelet/pods/a79c516a-5ba1-4bfc-b405-eb94d26a94c3/volumes" Mar 14 06:42:21 crc kubenswrapper[4817]: I0314 06:42:21.195333 4817 generic.go:334] "Generic (PLEG): container finished" podID="106079f9-3258-4c46-8ef4-1811c407fc69" containerID="3201285d70c77eb8608027fc6155f585afa22132c28d6742216f8c0cc5526ed6" exitCode=0 Mar 14 06:42:21 crc kubenswrapper[4817]: I0314 06:42:21.195454 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"106079f9-3258-4c46-8ef4-1811c407fc69","Type":"ContainerDied","Data":"3201285d70c77eb8608027fc6155f585afa22132c28d6742216f8c0cc5526ed6"} Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.648376 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.754216 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-config-data\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.754685 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config-secret\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.754716 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6c2s\" (UniqueName: \"kubernetes.io/projected/106079f9-3258-4c46-8ef4-1811c407fc69-kube-api-access-p6c2s\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.754763 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ca-certs\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.754846 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ssh-key\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.754879 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-temporary\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.754929 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.754958 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.755016 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-workdir\") pod \"106079f9-3258-4c46-8ef4-1811c407fc69\" (UID: \"106079f9-3258-4c46-8ef4-1811c407fc69\") " Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.755696 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.756127 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-config-data" (OuterVolumeSpecName: "config-data") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.762631 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.773518 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106079f9-3258-4c46-8ef4-1811c407fc69-kube-api-access-p6c2s" (OuterVolumeSpecName: "kube-api-access-p6c2s") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "kube-api-access-p6c2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.776517 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.787399 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.795502 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.808345 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.812788 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "106079f9-3258-4c46-8ef4-1811c407fc69" (UID: "106079f9-3258-4c46-8ef4-1811c407fc69"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.857418 4817 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.857451 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6c2s\" (UniqueName: \"kubernetes.io/projected/106079f9-3258-4c46-8ef4-1811c407fc69-kube-api-access-p6c2s\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.857463 4817 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.857473 4817 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/106079f9-3258-4c46-8ef4-1811c407fc69-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.857485 4817 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.857497 4817 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.857523 4817 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 14 06:42:22 crc kubenswrapper[4817]: 
I0314 06:42:22.857535 4817 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/106079f9-3258-4c46-8ef4-1811c407fc69-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.857546 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106079f9-3258-4c46-8ef4-1811c407fc69-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.880271 4817 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 14 06:42:22 crc kubenswrapper[4817]: I0314 06:42:22.959721 4817 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 14 06:42:23 crc kubenswrapper[4817]: I0314 06:42:23.220205 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"106079f9-3258-4c46-8ef4-1811c407fc69","Type":"ContainerDied","Data":"5d181735afd4cdcde3c880668d7d68ded31bdccc2eeec44eff4953db5040a8f5"} Mar 14 06:42:23 crc kubenswrapper[4817]: I0314 06:42:23.220267 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d181735afd4cdcde3c880668d7d68ded31bdccc2eeec44eff4953db5040a8f5" Mar 14 06:42:23 crc kubenswrapper[4817]: I0314 06:42:23.220361 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.080432 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Mar 14 06:42:29 crc kubenswrapper[4817]: E0314 06:42:29.082587 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3db2c6-b136-4a92-bf84-ec29d8121c48" containerName="oc"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.082604 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3db2c6-b136-4a92-bf84-ec29d8121c48" containerName="oc"
Mar 14 06:42:29 crc kubenswrapper[4817]: E0314 06:42:29.082633 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106079f9-3258-4c46-8ef4-1811c407fc69" containerName="tempest-tests-tempest-tests-runner"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.082641 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="106079f9-3258-4c46-8ef4-1811c407fc69" containerName="tempest-tests-tempest-tests-runner"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.082809 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3db2c6-b136-4a92-bf84-ec29d8121c48" containerName="oc"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.082825 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="106079f9-3258-4c46-8ef4-1811c407fc69" containerName="tempest-tests-tempest-tests-runner"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.083471 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.090853 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dvggb"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.092243 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.184879 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz7p9\" (UniqueName: \"kubernetes.io/projected/068e1487-9973-4256-8646-2ef08528eeda-kube-api-access-gz7p9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"068e1487-9973-4256-8646-2ef08528eeda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.184974 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"068e1487-9973-4256-8646-2ef08528eeda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.286798 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz7p9\" (UniqueName: \"kubernetes.io/projected/068e1487-9973-4256-8646-2ef08528eeda-kube-api-access-gz7p9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"068e1487-9973-4256-8646-2ef08528eeda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.287624 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"068e1487-9973-4256-8646-2ef08528eeda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.287994 4817 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"068e1487-9973-4256-8646-2ef08528eeda\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.307722 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz7p9\" (UniqueName: \"kubernetes.io/projected/068e1487-9973-4256-8646-2ef08528eeda-kube-api-access-gz7p9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"068e1487-9973-4256-8646-2ef08528eeda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.315886 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"068e1487-9973-4256-8646-2ef08528eeda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.409647 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 14 06:42:29 crc kubenswrapper[4817]: I0314 06:42:29.860974 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Mar 14 06:42:30 crc kubenswrapper[4817]: I0314 06:42:30.282218 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"068e1487-9973-4256-8646-2ef08528eeda","Type":"ContainerStarted","Data":"d64ca988b72df2de6433ac16b876fe2114831b36f076ee3e68c2b46f3bd087bb"}
Mar 14 06:42:31 crc kubenswrapper[4817]: I0314 06:42:31.294612 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"068e1487-9973-4256-8646-2ef08528eeda","Type":"ContainerStarted","Data":"080e7fbbf985dd89910a2bd8ab440733ffb315a32ad4b9e7254381c136697c84"}
Mar 14 06:42:31 crc kubenswrapper[4817]: I0314 06:42:31.320299 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.4438572760000001 podStartE2EDuration="2.320272788s" podCreationTimestamp="2026-03-14 06:42:29 +0000 UTC" firstStartedPulling="2026-03-14 06:42:29.857086373 +0000 UTC m=+4203.895347119" lastFinishedPulling="2026-03-14 06:42:30.733501885 +0000 UTC m=+4204.771762631" observedRunningTime="2026-03-14 06:42:31.312868097 +0000 UTC m=+4205.351128863" watchObservedRunningTime="2026-03-14 06:42:31.320272788 +0000 UTC m=+4205.358533534"
Mar 14 06:42:38 crc kubenswrapper[4817]: I0314 06:42:38.565437 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:42:38 crc kubenswrapper[4817]: I0314 06:42:38.566011 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:42:47 crc kubenswrapper[4817]: I0314 06:42:47.331529 4817 scope.go:117] "RemoveContainer" containerID="5167e0897499ecbb95fc76037749c2083bb1e339f3083e94fe8deb45e82ebb7c"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.248538 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r5dsk/must-gather-lmv7f"]
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.250929 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.252959 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-r5dsk"/"openshift-service-ca.crt"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.253218 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-r5dsk"/"default-dockercfg-c7rwp"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.254323 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-r5dsk"/"kube-root-ca.crt"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.258411 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-r5dsk/must-gather-lmv7f"]
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.334859 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bg24\" (UniqueName: \"kubernetes.io/projected/3da5f098-d875-4421-97c1-f1a445fe18ea-kube-api-access-4bg24\") pod \"must-gather-lmv7f\" (UID: \"3da5f098-d875-4421-97c1-f1a445fe18ea\") " pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.335275 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3da5f098-d875-4421-97c1-f1a445fe18ea-must-gather-output\") pod \"must-gather-lmv7f\" (UID: \"3da5f098-d875-4421-97c1-f1a445fe18ea\") " pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.437480 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3da5f098-d875-4421-97c1-f1a445fe18ea-must-gather-output\") pod \"must-gather-lmv7f\" (UID: \"3da5f098-d875-4421-97c1-f1a445fe18ea\") " pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.437590 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bg24\" (UniqueName: \"kubernetes.io/projected/3da5f098-d875-4421-97c1-f1a445fe18ea-kube-api-access-4bg24\") pod \"must-gather-lmv7f\" (UID: \"3da5f098-d875-4421-97c1-f1a445fe18ea\") " pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.438128 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3da5f098-d875-4421-97c1-f1a445fe18ea-must-gather-output\") pod \"must-gather-lmv7f\" (UID: \"3da5f098-d875-4421-97c1-f1a445fe18ea\") " pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.461980 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bg24\" (UniqueName: \"kubernetes.io/projected/3da5f098-d875-4421-97c1-f1a445fe18ea-kube-api-access-4bg24\") pod \"must-gather-lmv7f\" (UID: \"3da5f098-d875-4421-97c1-f1a445fe18ea\") " pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:42:54 crc kubenswrapper[4817]: I0314 06:42:54.570527 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:42:55 crc kubenswrapper[4817]: I0314 06:42:55.256760 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-r5dsk/must-gather-lmv7f"]
Mar 14 06:42:55 crc kubenswrapper[4817]: I0314 06:42:55.544861 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/must-gather-lmv7f" event={"ID":"3da5f098-d875-4421-97c1-f1a445fe18ea","Type":"ContainerStarted","Data":"2a874b95a5af7eb0f907d7c1057676dae039c6cb7247c7a8bc861ea412452c28"}
Mar 14 06:43:02 crc kubenswrapper[4817]: I0314 06:43:02.633543 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/must-gather-lmv7f" event={"ID":"3da5f098-d875-4421-97c1-f1a445fe18ea","Type":"ContainerStarted","Data":"1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03"}
Mar 14 06:43:02 crc kubenswrapper[4817]: I0314 06:43:02.634096 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/must-gather-lmv7f" event={"ID":"3da5f098-d875-4421-97c1-f1a445fe18ea","Type":"ContainerStarted","Data":"7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637"}
Mar 14 06:43:02 crc kubenswrapper[4817]: I0314 06:43:02.657812 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-r5dsk/must-gather-lmv7f" podStartSLOduration=2.47882505 podStartE2EDuration="8.657785279s" podCreationTimestamp="2026-03-14 06:42:54 +0000 UTC" firstStartedPulling="2026-03-14 06:42:55.26504404 +0000 UTC m=+4229.303304786" lastFinishedPulling="2026-03-14 06:43:01.444004279 +0000 UTC m=+4235.482265015" observedRunningTime="2026-03-14 06:43:02.64937091 +0000 UTC m=+4236.687631666" watchObservedRunningTime="2026-03-14 06:43:02.657785279 +0000 UTC m=+4236.696046025"
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.107945 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-hs24x"]
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.109642 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.222700 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqp6b\" (UniqueName: \"kubernetes.io/projected/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-kube-api-access-cqp6b\") pod \"crc-debug-hs24x\" (UID: \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\") " pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.223103 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-host\") pod \"crc-debug-hs24x\" (UID: \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\") " pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.325628 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-host\") pod \"crc-debug-hs24x\" (UID: \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\") " pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.325757 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-host\") pod \"crc-debug-hs24x\" (UID: \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\") " pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.326670 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqp6b\" (UniqueName: \"kubernetes.io/projected/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-kube-api-access-cqp6b\") pod \"crc-debug-hs24x\" (UID: \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\") " pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.346951 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqp6b\" (UniqueName: \"kubernetes.io/projected/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-kube-api-access-cqp6b\") pod \"crc-debug-hs24x\" (UID: \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\") " pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.429279 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:43:06 crc kubenswrapper[4817]: W0314 06:43:06.468586 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ed90032_69ba_4333_bc2b_e875ecd2cbf9.slice/crio-1166c4abf433c9fb28d261004dad9a186e4a5e84a2927d76d29ea2bb0e7f1615 WatchSource:0}: Error finding container 1166c4abf433c9fb28d261004dad9a186e4a5e84a2927d76d29ea2bb0e7f1615: Status 404 returned error can't find the container with id 1166c4abf433c9fb28d261004dad9a186e4a5e84a2927d76d29ea2bb0e7f1615
Mar 14 06:43:06 crc kubenswrapper[4817]: I0314 06:43:06.667024 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/crc-debug-hs24x" event={"ID":"1ed90032-69ba-4333-bc2b-e875ecd2cbf9","Type":"ContainerStarted","Data":"1166c4abf433c9fb28d261004dad9a186e4a5e84a2927d76d29ea2bb0e7f1615"}
Mar 14 06:43:08 crc kubenswrapper[4817]: I0314 06:43:08.566395 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:43:08 crc kubenswrapper[4817]: I0314 06:43:08.567043 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:43:22 crc kubenswrapper[4817]: I0314 06:43:22.822550 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/crc-debug-hs24x" event={"ID":"1ed90032-69ba-4333-bc2b-e875ecd2cbf9","Type":"ContainerStarted","Data":"aa3896f12cb116f22c44499afd27ddf37140cc04223ce41d9b0e8b79c989d3e4"}
Mar 14 06:43:38 crc kubenswrapper[4817]: I0314 06:43:38.566484 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:43:38 crc kubenswrapper[4817]: I0314 06:43:38.567049 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:43:38 crc kubenswrapper[4817]: I0314 06:43:38.567103 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl"
Mar 14 06:43:38 crc kubenswrapper[4817]: I0314 06:43:38.567868 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70880caa4a057b35883c3ab3eebd0dbd099513e1fdd2a5a5f0cfa99e7431ada4"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 14 06:43:38 crc kubenswrapper[4817]: I0314 06:43:38.567933 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://70880caa4a057b35883c3ab3eebd0dbd099513e1fdd2a5a5f0cfa99e7431ada4" gracePeriod=600
Mar 14 06:43:38 crc kubenswrapper[4817]: I0314 06:43:38.994129 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="70880caa4a057b35883c3ab3eebd0dbd099513e1fdd2a5a5f0cfa99e7431ada4" exitCode=0
Mar 14 06:43:38 crc kubenswrapper[4817]: I0314 06:43:38.994406 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"70880caa4a057b35883c3ab3eebd0dbd099513e1fdd2a5a5f0cfa99e7431ada4"}
Mar 14 06:43:38 crc kubenswrapper[4817]: I0314 06:43:38.994441 4817 scope.go:117] "RemoveContainer" containerID="c9a29688f41a0f22b0b577011fc5cf5857d953776257075e81672a895b0f6c9f"
Mar 14 06:43:40 crc kubenswrapper[4817]: I0314 06:43:40.007191 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"}
Mar 14 06:43:40 crc kubenswrapper[4817]: I0314 06:43:40.028019 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-r5dsk/crc-debug-hs24x" podStartSLOduration=18.088874308 podStartE2EDuration="34.027993555s" podCreationTimestamp="2026-03-14 06:43:06 +0000 UTC" firstStartedPulling="2026-03-14 06:43:06.471055989 +0000 UTC m=+4240.509316735" lastFinishedPulling="2026-03-14 06:43:22.410175246 +0000 UTC m=+4256.448435982" observedRunningTime="2026-03-14 06:43:22.841411928 +0000 UTC m=+4256.879672674" watchObservedRunningTime="2026-03-14 06:43:40.027993555 +0000 UTC m=+4274.066254301"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.148568 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557844-d9mbn"]
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.150811 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557844-d9mbn"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.159458 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.159684 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.159866 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.167195 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557844-d9mbn"]
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.198040 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l64h7\" (UniqueName: \"kubernetes.io/projected/f78075c1-f4bc-428d-95f2-d5fd7c12cd42-kube-api-access-l64h7\") pod \"auto-csr-approver-29557844-d9mbn\" (UID: \"f78075c1-f4bc-428d-95f2-d5fd7c12cd42\") " pod="openshift-infra/auto-csr-approver-29557844-d9mbn"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.300447 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l64h7\" (UniqueName: \"kubernetes.io/projected/f78075c1-f4bc-428d-95f2-d5fd7c12cd42-kube-api-access-l64h7\") pod \"auto-csr-approver-29557844-d9mbn\" (UID: \"f78075c1-f4bc-428d-95f2-d5fd7c12cd42\") " pod="openshift-infra/auto-csr-approver-29557844-d9mbn"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.327713 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l64h7\" (UniqueName: \"kubernetes.io/projected/f78075c1-f4bc-428d-95f2-d5fd7c12cd42-kube-api-access-l64h7\") pod \"auto-csr-approver-29557844-d9mbn\" (UID: \"f78075c1-f4bc-428d-95f2-d5fd7c12cd42\") " pod="openshift-infra/auto-csr-approver-29557844-d9mbn"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.489357 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557844-d9mbn"
Mar 14 06:44:00 crc kubenswrapper[4817]: I0314 06:44:00.943421 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557844-d9mbn"]
Mar 14 06:44:00 crc kubenswrapper[4817]: W0314 06:44:00.947648 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78075c1_f4bc_428d_95f2_d5fd7c12cd42.slice/crio-f6c56fa5fb04e53f50d5c788060c39b457608df0f53b115f5c5e30570cd4b07c WatchSource:0}: Error finding container f6c56fa5fb04e53f50d5c788060c39b457608df0f53b115f5c5e30570cd4b07c: Status 404 returned error can't find the container with id f6c56fa5fb04e53f50d5c788060c39b457608df0f53b115f5c5e30570cd4b07c
Mar 14 06:44:01 crc kubenswrapper[4817]: I0314 06:44:01.233261 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557844-d9mbn" event={"ID":"f78075c1-f4bc-428d-95f2-d5fd7c12cd42","Type":"ContainerStarted","Data":"f6c56fa5fb04e53f50d5c788060c39b457608df0f53b115f5c5e30570cd4b07c"}
Mar 14 06:44:02 crc kubenswrapper[4817]: I0314 06:44:02.247207 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557844-d9mbn" event={"ID":"f78075c1-f4bc-428d-95f2-d5fd7c12cd42","Type":"ContainerStarted","Data":"2f31dc023ad7b19953abda0c921b134cc90bbc4786f0c7125a260fae0f344696"}
Mar 14 06:44:02 crc kubenswrapper[4817]: I0314 06:44:02.263193 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557844-d9mbn" podStartSLOduration=1.518341311 podStartE2EDuration="2.263171755s" podCreationTimestamp="2026-03-14 06:44:00 +0000 UTC" firstStartedPulling="2026-03-14 06:44:00.951182257 +0000 UTC m=+4294.989443003" lastFinishedPulling="2026-03-14 06:44:01.696012661 +0000 UTC m=+4295.734273447" observedRunningTime="2026-03-14 06:44:02.262092644 +0000 UTC m=+4296.300353410" watchObservedRunningTime="2026-03-14 06:44:02.263171755 +0000 UTC m=+4296.301432501"
Mar 14 06:44:03 crc kubenswrapper[4817]: I0314 06:44:03.258028 4817 generic.go:334] "Generic (PLEG): container finished" podID="f78075c1-f4bc-428d-95f2-d5fd7c12cd42" containerID="2f31dc023ad7b19953abda0c921b134cc90bbc4786f0c7125a260fae0f344696" exitCode=0
Mar 14 06:44:03 crc kubenswrapper[4817]: I0314 06:44:03.258086 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557844-d9mbn" event={"ID":"f78075c1-f4bc-428d-95f2-d5fd7c12cd42","Type":"ContainerDied","Data":"2f31dc023ad7b19953abda0c921b134cc90bbc4786f0c7125a260fae0f344696"}
Mar 14 06:44:04 crc kubenswrapper[4817]: I0314 06:44:04.632364 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557844-d9mbn"
Mar 14 06:44:04 crc kubenswrapper[4817]: I0314 06:44:04.693163 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l64h7\" (UniqueName: \"kubernetes.io/projected/f78075c1-f4bc-428d-95f2-d5fd7c12cd42-kube-api-access-l64h7\") pod \"f78075c1-f4bc-428d-95f2-d5fd7c12cd42\" (UID: \"f78075c1-f4bc-428d-95f2-d5fd7c12cd42\") "
Mar 14 06:44:04 crc kubenswrapper[4817]: I0314 06:44:04.700304 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f78075c1-f4bc-428d-95f2-d5fd7c12cd42-kube-api-access-l64h7" (OuterVolumeSpecName: "kube-api-access-l64h7") pod "f78075c1-f4bc-428d-95f2-d5fd7c12cd42" (UID: "f78075c1-f4bc-428d-95f2-d5fd7c12cd42"). InnerVolumeSpecName "kube-api-access-l64h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:44:04 crc kubenswrapper[4817]: I0314 06:44:04.797214 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l64h7\" (UniqueName: \"kubernetes.io/projected/f78075c1-f4bc-428d-95f2-d5fd7c12cd42-kube-api-access-l64h7\") on node \"crc\" DevicePath \"\""
Mar 14 06:44:05 crc kubenswrapper[4817]: I0314 06:44:05.282883 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557844-d9mbn" event={"ID":"f78075c1-f4bc-428d-95f2-d5fd7c12cd42","Type":"ContainerDied","Data":"f6c56fa5fb04e53f50d5c788060c39b457608df0f53b115f5c5e30570cd4b07c"}
Mar 14 06:44:05 crc kubenswrapper[4817]: I0314 06:44:05.282972 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6c56fa5fb04e53f50d5c788060c39b457608df0f53b115f5c5e30570cd4b07c"
Mar 14 06:44:05 crc kubenswrapper[4817]: I0314 06:44:05.282998 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557844-d9mbn"
Mar 14 06:44:05 crc kubenswrapper[4817]: I0314 06:44:05.337533 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557838-c9nd7"]
Mar 14 06:44:05 crc kubenswrapper[4817]: I0314 06:44:05.346435 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557838-c9nd7"]
Mar 14 06:44:06 crc kubenswrapper[4817]: I0314 06:44:06.745708 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab959af-c99f-4a90-80f3-2af0891ecdc7" path="/var/lib/kubelet/pods/3ab959af-c99f-4a90-80f3-2af0891ecdc7/volumes"
Mar 14 06:44:08 crc kubenswrapper[4817]: I0314 06:44:08.321030 4817 generic.go:334] "Generic (PLEG): container finished" podID="1ed90032-69ba-4333-bc2b-e875ecd2cbf9" containerID="aa3896f12cb116f22c44499afd27ddf37140cc04223ce41d9b0e8b79c989d3e4" exitCode=0
Mar 14 06:44:08 crc kubenswrapper[4817]: I0314 06:44:08.321153 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/crc-debug-hs24x" event={"ID":"1ed90032-69ba-4333-bc2b-e875ecd2cbf9","Type":"ContainerDied","Data":"aa3896f12cb116f22c44499afd27ddf37140cc04223ce41d9b0e8b79c989d3e4"}
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.445216 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.481067 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-hs24x"]
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.490200 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-hs24x"]
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.586118 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqp6b\" (UniqueName: \"kubernetes.io/projected/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-kube-api-access-cqp6b\") pod \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\" (UID: \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\") "
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.586370 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-host\") pod \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\" (UID: \"1ed90032-69ba-4333-bc2b-e875ecd2cbf9\") "
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.587056 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-host" (OuterVolumeSpecName: "host") pod "1ed90032-69ba-4333-bc2b-e875ecd2cbf9" (UID: "1ed90032-69ba-4333-bc2b-e875ecd2cbf9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.597331 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-kube-api-access-cqp6b" (OuterVolumeSpecName: "kube-api-access-cqp6b") pod "1ed90032-69ba-4333-bc2b-e875ecd2cbf9" (UID: "1ed90032-69ba-4333-bc2b-e875ecd2cbf9"). InnerVolumeSpecName "kube-api-access-cqp6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.688528 4817 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-host\") on node \"crc\" DevicePath \"\""
Mar 14 06:44:09 crc kubenswrapper[4817]: I0314 06:44:09.688561 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqp6b\" (UniqueName: \"kubernetes.io/projected/1ed90032-69ba-4333-bc2b-e875ecd2cbf9-kube-api-access-cqp6b\") on node \"crc\" DevicePath \"\""
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.492312 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1166c4abf433c9fb28d261004dad9a186e4a5e84a2927d76d29ea2bb0e7f1615"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.492380 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-hs24x"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.663318 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-zf2qw"]
Mar 14 06:44:10 crc kubenswrapper[4817]: E0314 06:44:10.664017 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed90032-69ba-4333-bc2b-e875ecd2cbf9" containerName="container-00"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.664037 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed90032-69ba-4333-bc2b-e875ecd2cbf9" containerName="container-00"
Mar 14 06:44:10 crc kubenswrapper[4817]: E0314 06:44:10.664046 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78075c1-f4bc-428d-95f2-d5fd7c12cd42" containerName="oc"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.664053 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78075c1-f4bc-428d-95f2-d5fd7c12cd42" containerName="oc"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.664238 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed90032-69ba-4333-bc2b-e875ecd2cbf9" containerName="container-00"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.664269 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78075c1-f4bc-428d-95f2-d5fd7c12cd42" containerName="oc"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.664950 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-zf2qw"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.744199 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed90032-69ba-4333-bc2b-e875ecd2cbf9" path="/var/lib/kubelet/pods/1ed90032-69ba-4333-bc2b-e875ecd2cbf9/volumes"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.813211 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-host\") pod \"crc-debug-zf2qw\" (UID: \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\") " pod="openshift-must-gather-r5dsk/crc-debug-zf2qw"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.813371 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dzzz\" (UniqueName: \"kubernetes.io/projected/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-kube-api-access-9dzzz\") pod \"crc-debug-zf2qw\" (UID: \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\") " pod="openshift-must-gather-r5dsk/crc-debug-zf2qw"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.915727 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-host\") pod \"crc-debug-zf2qw\" (UID: \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\") " pod="openshift-must-gather-r5dsk/crc-debug-zf2qw"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.917177 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dzzz\" (UniqueName: \"kubernetes.io/projected/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-kube-api-access-9dzzz\") pod \"crc-debug-zf2qw\" (UID: \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\") " pod="openshift-must-gather-r5dsk/crc-debug-zf2qw"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.916227 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-host\") pod \"crc-debug-zf2qw\" (UID: \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\") " pod="openshift-must-gather-r5dsk/crc-debug-zf2qw"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.936267 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dzzz\" (UniqueName: \"kubernetes.io/projected/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-kube-api-access-9dzzz\") pod \"crc-debug-zf2qw\" (UID: \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\") " pod="openshift-must-gather-r5dsk/crc-debug-zf2qw"
Mar 14 06:44:10 crc kubenswrapper[4817]: I0314 06:44:10.980872 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-zf2qw"
Mar 14 06:44:11 crc kubenswrapper[4817]: W0314 06:44:11.011414 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9e67ad0_3ba9_43b5_ad2b_2d13cb346b9e.slice/crio-16189a3588d6a70ccd4f7592d9ff12e1178435b710cda2e806120d98f3e696cd WatchSource:0}: Error finding container 16189a3588d6a70ccd4f7592d9ff12e1178435b710cda2e806120d98f3e696cd: Status 404 returned error can't find the container with id 16189a3588d6a70ccd4f7592d9ff12e1178435b710cda2e806120d98f3e696cd
Mar 14 06:44:11 crc kubenswrapper[4817]: I0314 06:44:11.504664 4817 generic.go:334] "Generic (PLEG): container finished" podID="d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e" containerID="8222c00c0a2df81eaeeffa86d3615c5f4d5713d0bcb6e5ea9fc72f5a8cc2ef81" exitCode=0
Mar 14 06:44:11 crc kubenswrapper[4817]: I0314 06:44:11.504737 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/crc-debug-zf2qw" event={"ID":"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e","Type":"ContainerDied","Data":"8222c00c0a2df81eaeeffa86d3615c5f4d5713d0bcb6e5ea9fc72f5a8cc2ef81"}
Mar 14 06:44:11 crc kubenswrapper[4817]: I0314 06:44:11.505125 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/crc-debug-zf2qw" event={"ID":"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e","Type":"ContainerStarted","Data":"16189a3588d6a70ccd4f7592d9ff12e1178435b710cda2e806120d98f3e696cd"}
Mar 14 06:44:12 crc kubenswrapper[4817]: I0314 06:44:12.630412 4817 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-zf2qw" Mar 14 06:44:12 crc kubenswrapper[4817]: I0314 06:44:12.753068 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dzzz\" (UniqueName: \"kubernetes.io/projected/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-kube-api-access-9dzzz\") pod \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\" (UID: \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\") " Mar 14 06:44:12 crc kubenswrapper[4817]: I0314 06:44:12.753177 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-host\") pod \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\" (UID: \"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e\") " Mar 14 06:44:12 crc kubenswrapper[4817]: I0314 06:44:12.753846 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-host" (OuterVolumeSpecName: "host") pod "d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e" (UID: "d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 06:44:12 crc kubenswrapper[4817]: I0314 06:44:12.754142 4817 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-host\") on node \"crc\" DevicePath \"\"" Mar 14 06:44:12 crc kubenswrapper[4817]: I0314 06:44:12.766146 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-kube-api-access-9dzzz" (OuterVolumeSpecName: "kube-api-access-9dzzz") pod "d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e" (UID: "d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e"). InnerVolumeSpecName "kube-api-access-9dzzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:44:12 crc kubenswrapper[4817]: I0314 06:44:12.855535 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dzzz\" (UniqueName: \"kubernetes.io/projected/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e-kube-api-access-9dzzz\") on node \"crc\" DevicePath \"\"" Mar 14 06:44:13 crc kubenswrapper[4817]: I0314 06:44:13.533745 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/crc-debug-zf2qw" event={"ID":"d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e","Type":"ContainerDied","Data":"16189a3588d6a70ccd4f7592d9ff12e1178435b710cda2e806120d98f3e696cd"} Mar 14 06:44:13 crc kubenswrapper[4817]: I0314 06:44:13.533789 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16189a3588d6a70ccd4f7592d9ff12e1178435b710cda2e806120d98f3e696cd" Mar 14 06:44:13 crc kubenswrapper[4817]: I0314 06:44:13.533807 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-zf2qw" Mar 14 06:44:13 crc kubenswrapper[4817]: I0314 06:44:13.899277 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-zf2qw"] Mar 14 06:44:13 crc kubenswrapper[4817]: I0314 06:44:13.911060 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-zf2qw"] Mar 14 06:44:14 crc kubenswrapper[4817]: I0314 06:44:14.747613 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e" path="/var/lib/kubelet/pods/d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e/volumes" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.097635 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-b8k2r"] Mar 14 06:44:15 crc kubenswrapper[4817]: E0314 06:44:15.098175 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e" 
containerName="container-00" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.098194 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e" containerName="container-00" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.098445 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9e67ad0-3ba9-43b5-ad2b-2d13cb346b9e" containerName="container-00" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.099119 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.205629 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cbfaef-82b2-4e37-969a-3c4457b31405-host\") pod \"crc-debug-b8k2r\" (UID: \"b4cbfaef-82b2-4e37-969a-3c4457b31405\") " pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.206049 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkw8j\" (UniqueName: \"kubernetes.io/projected/b4cbfaef-82b2-4e37-969a-3c4457b31405-kube-api-access-pkw8j\") pod \"crc-debug-b8k2r\" (UID: \"b4cbfaef-82b2-4e37-969a-3c4457b31405\") " pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.308168 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cbfaef-82b2-4e37-969a-3c4457b31405-host\") pod \"crc-debug-b8k2r\" (UID: \"b4cbfaef-82b2-4e37-969a-3c4457b31405\") " pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.308264 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkw8j\" (UniqueName: 
\"kubernetes.io/projected/b4cbfaef-82b2-4e37-969a-3c4457b31405-kube-api-access-pkw8j\") pod \"crc-debug-b8k2r\" (UID: \"b4cbfaef-82b2-4e37-969a-3c4457b31405\") " pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.308321 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cbfaef-82b2-4e37-969a-3c4457b31405-host\") pod \"crc-debug-b8k2r\" (UID: \"b4cbfaef-82b2-4e37-969a-3c4457b31405\") " pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.447505 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkw8j\" (UniqueName: \"kubernetes.io/projected/b4cbfaef-82b2-4e37-969a-3c4457b31405-kube-api-access-pkw8j\") pod \"crc-debug-b8k2r\" (UID: \"b4cbfaef-82b2-4e37-969a-3c4457b31405\") " pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:15 crc kubenswrapper[4817]: I0314 06:44:15.719940 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:16 crc kubenswrapper[4817]: I0314 06:44:16.561404 4817 generic.go:334] "Generic (PLEG): container finished" podID="b4cbfaef-82b2-4e37-969a-3c4457b31405" containerID="6ad373b2e5e312cce3525d56aa1e3e261cea6e62101f9edd1d9ce936f33470f6" exitCode=0 Mar 14 06:44:16 crc kubenswrapper[4817]: I0314 06:44:16.561491 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" event={"ID":"b4cbfaef-82b2-4e37-969a-3c4457b31405","Type":"ContainerDied","Data":"6ad373b2e5e312cce3525d56aa1e3e261cea6e62101f9edd1d9ce936f33470f6"} Mar 14 06:44:16 crc kubenswrapper[4817]: I0314 06:44:16.562078 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" event={"ID":"b4cbfaef-82b2-4e37-969a-3c4457b31405","Type":"ContainerStarted","Data":"d69f1968b31b9d0b2b06efc65870199aad6090b9d6e97241e6609ec559b3eab7"} Mar 14 06:44:16 crc kubenswrapper[4817]: I0314 06:44:16.606505 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-b8k2r"] Mar 14 06:44:16 crc kubenswrapper[4817]: I0314 06:44:16.617314 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r5dsk/crc-debug-b8k2r"] Mar 14 06:44:17 crc kubenswrapper[4817]: I0314 06:44:17.685619 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:17 crc kubenswrapper[4817]: I0314 06:44:17.758791 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cbfaef-82b2-4e37-969a-3c4457b31405-host\") pod \"b4cbfaef-82b2-4e37-969a-3c4457b31405\" (UID: \"b4cbfaef-82b2-4e37-969a-3c4457b31405\") " Mar 14 06:44:17 crc kubenswrapper[4817]: I0314 06:44:17.758932 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkw8j\" (UniqueName: \"kubernetes.io/projected/b4cbfaef-82b2-4e37-969a-3c4457b31405-kube-api-access-pkw8j\") pod \"b4cbfaef-82b2-4e37-969a-3c4457b31405\" (UID: \"b4cbfaef-82b2-4e37-969a-3c4457b31405\") " Mar 14 06:44:17 crc kubenswrapper[4817]: I0314 06:44:17.759312 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4cbfaef-82b2-4e37-969a-3c4457b31405-host" (OuterVolumeSpecName: "host") pod "b4cbfaef-82b2-4e37-969a-3c4457b31405" (UID: "b4cbfaef-82b2-4e37-969a-3c4457b31405"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 06:44:17 crc kubenswrapper[4817]: I0314 06:44:17.760065 4817 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cbfaef-82b2-4e37-969a-3c4457b31405-host\") on node \"crc\" DevicePath \"\"" Mar 14 06:44:17 crc kubenswrapper[4817]: I0314 06:44:17.765756 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cbfaef-82b2-4e37-969a-3c4457b31405-kube-api-access-pkw8j" (OuterVolumeSpecName: "kube-api-access-pkw8j") pod "b4cbfaef-82b2-4e37-969a-3c4457b31405" (UID: "b4cbfaef-82b2-4e37-969a-3c4457b31405"). InnerVolumeSpecName "kube-api-access-pkw8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:44:17 crc kubenswrapper[4817]: I0314 06:44:17.863092 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkw8j\" (UniqueName: \"kubernetes.io/projected/b4cbfaef-82b2-4e37-969a-3c4457b31405-kube-api-access-pkw8j\") on node \"crc\" DevicePath \"\"" Mar 14 06:44:18 crc kubenswrapper[4817]: I0314 06:44:18.594842 4817 scope.go:117] "RemoveContainer" containerID="6ad373b2e5e312cce3525d56aa1e3e261cea6e62101f9edd1d9ce936f33470f6" Mar 14 06:44:18 crc kubenswrapper[4817]: I0314 06:44:18.594924 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/crc-debug-b8k2r" Mar 14 06:44:18 crc kubenswrapper[4817]: I0314 06:44:18.747358 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cbfaef-82b2-4e37-969a-3c4457b31405" path="/var/lib/kubelet/pods/b4cbfaef-82b2-4e37-969a-3c4457b31405/volumes" Mar 14 06:44:37 crc kubenswrapper[4817]: I0314 06:44:37.359313 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-565cfb5466-k8v6z_af06e777-9e2e-437e-a013-cd5e83735ac0/barbican-api/0.log" Mar 14 06:44:37 crc kubenswrapper[4817]: I0314 06:44:37.364039 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-565cfb5466-k8v6z_af06e777-9e2e-437e-a013-cd5e83735ac0/barbican-api-log/0.log" Mar 14 06:44:37 crc kubenswrapper[4817]: I0314 06:44:37.547449 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b7556b9f8-gtmkg_cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0/barbican-keystone-listener/0.log" Mar 14 06:44:37 crc kubenswrapper[4817]: I0314 06:44:37.667833 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-658f9b4fd7-k22b5_a737974d-6611-4a56-9bbb-27256380ae54/barbican-worker/0.log" Mar 14 06:44:37 crc kubenswrapper[4817]: I0314 06:44:37.809749 4817 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-worker-658f9b4fd7-k22b5_a737974d-6611-4a56-9bbb-27256380ae54/barbican-worker-log/0.log" Mar 14 06:44:37 crc kubenswrapper[4817]: I0314 06:44:37.849147 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b7556b9f8-gtmkg_cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0/barbican-keystone-listener-log/0.log" Mar 14 06:44:38 crc kubenswrapper[4817]: I0314 06:44:38.006777 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c_03575d81-89e3-4d1a-a27a-5aad81319453/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:38 crc kubenswrapper[4817]: I0314 06:44:38.182260 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edf54d5a-1b48-43ca-a621-e815ccf42e59/proxy-httpd/0.log" Mar 14 06:44:38 crc kubenswrapper[4817]: I0314 06:44:38.224004 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edf54d5a-1b48-43ca-a621-e815ccf42e59/ceilometer-notification-agent/0.log" Mar 14 06:44:38 crc kubenswrapper[4817]: I0314 06:44:38.226354 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edf54d5a-1b48-43ca-a621-e815ccf42e59/ceilometer-central-agent/0.log" Mar 14 06:44:38 crc kubenswrapper[4817]: I0314 06:44:38.292002 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edf54d5a-1b48-43ca-a621-e815ccf42e59/sg-core/0.log" Mar 14 06:44:38 crc kubenswrapper[4817]: I0314 06:44:38.405322 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-knzng_b24eeb13-77b4-4662-90f0-933ae091cfe2/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:38 crc kubenswrapper[4817]: I0314 06:44:38.516339 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn_bfe316ac-01fd-4838-b92a-7899469d769f/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.081104 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_937e0c39-f135-482a-b4f8-388fbd9a11bd/cinder-api/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.112122 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_937e0c39-f135-482a-b4f8-388fbd9a11bd/cinder-api-log/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.228577 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_1b031afc-6d59-484d-8490-f684bbad769f/cinder-backup/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.345825 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_1b031afc-6d59-484d-8490-f684bbad769f/probe/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.406474 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2a37bd39-17a6-4c93-8146-b694d6e30b37/cinder-scheduler/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.459863 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2a37bd39-17a6-4c93-8146-b694d6e30b37/probe/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.621756 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0f16a837-b3ad-4283-bd8e-19512d545253/cinder-volume/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.690196 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0f16a837-b3ad-4283-bd8e-19512d545253/probe/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.887866 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ndntk_7c8f94cd-c90d-40df-af0a-88ddf4730cbc/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:39 crc kubenswrapper[4817]: I0314 06:44:39.927291 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw_c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.050957 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-rc9hl_5405820a-1727-4506-aeef-6c081ec11d88/init/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.217091 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-rc9hl_5405820a-1727-4506-aeef-6c081ec11d88/init/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.295770 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-rc9hl_5405820a-1727-4506-aeef-6c081ec11d88/dnsmasq-dns/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.360684 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b7b55b75-81af-4e71-8710-7b050784fa23/glance-httpd/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.460510 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b7b55b75-81af-4e71-8710-7b050784fa23/glance-log/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.567675 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ebec1c57-27fe-4039-acbc-ddfb06dede94/glance-httpd/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.586754 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_ebec1c57-27fe-4039-acbc-ddfb06dede94/glance-log/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.848785 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-767cf48f8d-lxbdx_abfb19f7-bac6-45a5-953e-546d46435171/horizon/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.953642 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6_273c6104-8daf-4e5e-b87e-aaf48ee8ae1f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:40 crc kubenswrapper[4817]: I0314 06:44:40.979859 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-767cf48f8d-lxbdx_abfb19f7-bac6-45a5-953e-546d46435171/horizon-log/0.log" Mar 14 06:44:41 crc kubenswrapper[4817]: I0314 06:44:41.111389 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pr57t_d6d2380c-071b-413a-a854-7b25ae09401a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:41 crc kubenswrapper[4817]: I0314 06:44:41.268796 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29557801-4vr2l_3f78210b-4544-4dc2-8e6e-a873af162323/keystone-cron/0.log" Mar 14 06:44:41 crc kubenswrapper[4817]: I0314 06:44:41.437061 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_87a2e6ce-41f8-473e-ba22-d038bbef1de2/kube-state-metrics/0.log" Mar 14 06:44:41 crc kubenswrapper[4817]: I0314 06:44:41.596515 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn_921c813c-e71f-4a7b-b74c-a389c71e1d4f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.083993 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_ea6cf2e0-1f09-4e7b-8e73-21363bcad511/probe/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.112206 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f23f27ea-4a4a-44ca-9b1c-457e8e4e397a/manila-api/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.244712 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ea6cf2e0-1f09-4e7b-8e73-21363bcad511/manila-scheduler/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.413290 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5dc6f976cd-xr97w_bfd749ee-b04f-45eb-8a54-2594c1d4378f/keystone-api/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.476257 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_59850886-78b9-425e-895a-4a0f48438dbd/probe/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.691626 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_59850886-78b9-425e-895a-4a0f48438dbd/manila-share/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.890268 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f23f27ea-4a4a-44ca-9b1c-457e8e4e397a/manila-api-log/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.925635 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-549548697-x46rl_ea9ff5c7-bf16-488f-8289-cbc134c9416e/neutron-api/0.log" Mar 14 06:44:42 crc kubenswrapper[4817]: I0314 06:44:42.971652 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-549548697-x46rl_ea9ff5c7-bf16-488f-8289-cbc134c9416e/neutron-httpd/0.log" Mar 14 06:44:43 crc kubenswrapper[4817]: I0314 06:44:43.158414 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s_87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:43 crc kubenswrapper[4817]: I0314 06:44:43.468050 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd4cb5c8-0bac-4213-bc4d-42805d1b03f7/nova-api-log/0.log" Mar 14 06:44:43 crc kubenswrapper[4817]: I0314 06:44:43.537571 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4ffda2e9-2972-4633-aa0f-207e8095c237/nova-cell0-conductor-conductor/0.log" Mar 14 06:44:43 crc kubenswrapper[4817]: I0314 06:44:43.789617 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5af4f1fd-b11b-42bd-ba99-0a9658136ea0/nova-cell1-conductor-conductor/0.log" Mar 14 06:44:43 crc kubenswrapper[4817]: I0314 06:44:43.859107 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd4cb5c8-0bac-4213-bc4d-42805d1b03f7/nova-api-api/0.log" Mar 14 06:44:44 crc kubenswrapper[4817]: I0314 06:44:44.022596 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_acdb327b-4e5c-4ea0-bf01-a46f9e0034b0/nova-cell1-novncproxy-novncproxy/0.log" Mar 14 06:44:44 crc kubenswrapper[4817]: I0314 06:44:44.176251 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994_78a366f5-7ad6-43e0-be63-4c63cf2b21e8/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:44 crc kubenswrapper[4817]: I0314 06:44:44.533711 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d905ec00-43c0-4f8b-a52d-414b74697fb2/nova-metadata-log/0.log" Mar 14 06:44:44 crc kubenswrapper[4817]: I0314 06:44:44.762864 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_e65fa238-9d14-4be7-ae7b-0b3eb077d575/nova-scheduler-scheduler/0.log" Mar 14 06:44:44 crc kubenswrapper[4817]: I0314 06:44:44.849795 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29e96de0-75d9-4da0-a41e-3b93a7274083/mysql-bootstrap/0.log" Mar 14 06:44:44 crc kubenswrapper[4817]: I0314 06:44:44.959534 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d905ec00-43c0-4f8b-a52d-414b74697fb2/nova-metadata-metadata/0.log" Mar 14 06:44:45 crc kubenswrapper[4817]: I0314 06:44:45.111758 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29e96de0-75d9-4da0-a41e-3b93a7274083/mysql-bootstrap/0.log" Mar 14 06:44:45 crc kubenswrapper[4817]: I0314 06:44:45.180753 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29e96de0-75d9-4da0-a41e-3b93a7274083/galera/0.log" Mar 14 06:44:45 crc kubenswrapper[4817]: I0314 06:44:45.246073 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_099647fc-5cd6-4547-9400-8df4b6016b50/mysql-bootstrap/0.log" Mar 14 06:44:45 crc kubenswrapper[4817]: I0314 06:44:45.459016 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_30bca2b3-da0c-4f63-b9fb-95c742af358e/openstackclient/0.log" Mar 14 06:44:45 crc kubenswrapper[4817]: I0314 06:44:45.484930 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_099647fc-5cd6-4547-9400-8df4b6016b50/mysql-bootstrap/0.log" Mar 14 06:44:45 crc kubenswrapper[4817]: I0314 06:44:45.515092 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_099647fc-5cd6-4547-9400-8df4b6016b50/galera/0.log" Mar 14 06:44:45 crc kubenswrapper[4817]: I0314 06:44:45.664492 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-944q4_e515005b-34d6-46cb-8486-cf2e09877f9d/openstack-network-exporter/0.log" Mar 14 06:44:45 crc kubenswrapper[4817]: I0314 06:44:45.782214 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nsn5q_9790d6d0-9013-42cf-bb3d-394f5fc292ba/ovn-controller/0.log" Mar 14 06:44:46 crc kubenswrapper[4817]: I0314 06:44:46.022609 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rj9cw_900c734a-f840-4877-98fd-ff1415d6ad18/ovsdb-server-init/0.log" Mar 14 06:44:46 crc kubenswrapper[4817]: I0314 06:44:46.230164 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rj9cw_900c734a-f840-4877-98fd-ff1415d6ad18/ovsdb-server/0.log" Mar 14 06:44:46 crc kubenswrapper[4817]: I0314 06:44:46.234407 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rj9cw_900c734a-f840-4877-98fd-ff1415d6ad18/ovs-vswitchd/0.log" Mar 14 06:44:46 crc kubenswrapper[4817]: I0314 06:44:46.250594 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rj9cw_900c734a-f840-4877-98fd-ff1415d6ad18/ovsdb-server-init/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.158245 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_094c63b8-a153-4ae4-90a5-d65b5718abd1/openstack-network-exporter/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.203647 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-52cpk_dd55a087-6a2d-4515-9774-96d247a25d52/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.237308 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_094c63b8-a153-4ae4-90a5-d65b5718abd1/ovn-northd/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.390839 4817 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5d70fa19-d760-4bb0-b182-c1bcf7797f96/openstack-network-exporter/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.458206 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5d70fa19-d760-4bb0-b182-c1bcf7797f96/ovsdbserver-nb/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.481319 4817 scope.go:117] "RemoveContainer" containerID="62ba521c7c58c0e3b274ae414e2e978d567ee757202b0f6b96a4956b96789080" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.686878 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_523f27ff-3994-4742-af55-15befc50017e/openstack-network-exporter/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.726593 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_523f27ff-3994-4742-af55-15befc50017e/ovsdbserver-sb/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.847791 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-67dfb54788-qqrtk_5aad3d14-3e24-460a-b6b3-9508031f76d6/placement-api/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.923510 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9ecd33b-91dc-44fc-932d-d962a7835af9/setup-container/0.log" Mar 14 06:44:47 crc kubenswrapper[4817]: I0314 06:44:47.957243 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-67dfb54788-qqrtk_5aad3d14-3e24-460a-b6b3-9508031f76d6/placement-log/0.log" Mar 14 06:44:48 crc kubenswrapper[4817]: I0314 06:44:48.149628 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9ecd33b-91dc-44fc-932d-d962a7835af9/setup-container/0.log" Mar 14 06:44:48 crc kubenswrapper[4817]: I0314 06:44:48.190131 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9ecd33b-91dc-44fc-932d-d962a7835af9/rabbitmq/0.log" Mar 14 06:44:48 crc kubenswrapper[4817]: I0314 06:44:48.311032 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da2757e1-ec61-4c1f-8060-3da273bd77cd/setup-container/0.log" Mar 14 06:44:48 crc kubenswrapper[4817]: I0314 06:44:48.498858 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da2757e1-ec61-4c1f-8060-3da273bd77cd/setup-container/0.log" Mar 14 06:44:48 crc kubenswrapper[4817]: I0314 06:44:48.500151 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da2757e1-ec61-4c1f-8060-3da273bd77cd/rabbitmq/0.log" Mar 14 06:44:48 crc kubenswrapper[4817]: I0314 06:44:48.568792 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt_350b95d7-fdff-421f-bb13-2b9b307e0918/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:49 crc kubenswrapper[4817]: I0314 06:44:49.089965 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-h5rd5_9ccb7da0-02de-4f65-9b76-6c8c0a47a34e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:49 crc kubenswrapper[4817]: I0314 06:44:49.118918 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-76mst_b9ce930d-a273-4240-ad64-19c4d50a3ec6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:44:49 crc kubenswrapper[4817]: I0314 06:44:49.387823 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-g7866_9f8afb0b-9422-463a-86d7-9c59bcfac32f/ssh-known-hosts-edpm-deployment/0.log" Mar 14 06:44:49 crc kubenswrapper[4817]: I0314 06:44:49.464363 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_106079f9-3258-4c46-8ef4-1811c407fc69/tempest-tests-tempest-tests-runner/0.log" Mar 14 06:44:49 crc kubenswrapper[4817]: I0314 06:44:49.767378 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_068e1487-9973-4256-8646-2ef08528eeda/test-operator-logs-container/0.log" Mar 14 06:44:49 crc kubenswrapper[4817]: I0314 06:44:49.810860 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj_b73850a9-8701-4b80-8944-a762eaa7cf5e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.159869 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f"] Mar 14 06:45:00 crc kubenswrapper[4817]: E0314 06:45:00.160736 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cbfaef-82b2-4e37-969a-3c4457b31405" containerName="container-00" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.160749 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cbfaef-82b2-4e37-969a-3c4457b31405" containerName="container-00" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.160960 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cbfaef-82b2-4e37-969a-3c4457b31405" containerName="container-00" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.161592 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.164775 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.164842 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.184494 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f"] Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.341297 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a4bb52-ce0d-4061-b197-e524555e09e1-secret-volume\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.341594 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a4bb52-ce0d-4061-b197-e524555e09e1-config-volume\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.341836 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2v47\" (UniqueName: \"kubernetes.io/projected/18a4bb52-ce0d-4061-b197-e524555e09e1-kube-api-access-l2v47\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.444247 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a4bb52-ce0d-4061-b197-e524555e09e1-secret-volume\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.444441 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a4bb52-ce0d-4061-b197-e524555e09e1-config-volume\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.444505 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2v47\" (UniqueName: \"kubernetes.io/projected/18a4bb52-ce0d-4061-b197-e524555e09e1-kube-api-access-l2v47\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.445678 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a4bb52-ce0d-4061-b197-e524555e09e1-config-volume\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.459116 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/18a4bb52-ce0d-4061-b197-e524555e09e1-secret-volume\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.463661 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2v47\" (UniqueName: \"kubernetes.io/projected/18a4bb52-ce0d-4061-b197-e524555e09e1-kube-api-access-l2v47\") pod \"collect-profiles-29557845-pdz4f\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:00 crc kubenswrapper[4817]: I0314 06:45:00.500350 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:01 crc kubenswrapper[4817]: I0314 06:45:01.001361 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f"] Mar 14 06:45:01 crc kubenswrapper[4817]: I0314 06:45:01.970618 4817 generic.go:334] "Generic (PLEG): container finished" podID="18a4bb52-ce0d-4061-b197-e524555e09e1" containerID="38d19df33068a9adedf2c2cdc5f184c10389a748c12f35d3ba468bc2fb7af97f" exitCode=0 Mar 14 06:45:01 crc kubenswrapper[4817]: I0314 06:45:01.970972 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" event={"ID":"18a4bb52-ce0d-4061-b197-e524555e09e1","Type":"ContainerDied","Data":"38d19df33068a9adedf2c2cdc5f184c10389a748c12f35d3ba468bc2fb7af97f"} Mar 14 06:45:01 crc kubenswrapper[4817]: I0314 06:45:01.971002 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" 
event={"ID":"18a4bb52-ce0d-4061-b197-e524555e09e1","Type":"ContainerStarted","Data":"ac7db0cf8f126476f6b4b3a6a93e0b00e05ab4926288326d682b029fab258772"} Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.365776 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.515125 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a4bb52-ce0d-4061-b197-e524555e09e1-secret-volume\") pod \"18a4bb52-ce0d-4061-b197-e524555e09e1\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.515170 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2v47\" (UniqueName: \"kubernetes.io/projected/18a4bb52-ce0d-4061-b197-e524555e09e1-kube-api-access-l2v47\") pod \"18a4bb52-ce0d-4061-b197-e524555e09e1\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.515219 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a4bb52-ce0d-4061-b197-e524555e09e1-config-volume\") pod \"18a4bb52-ce0d-4061-b197-e524555e09e1\" (UID: \"18a4bb52-ce0d-4061-b197-e524555e09e1\") " Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.516796 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a4bb52-ce0d-4061-b197-e524555e09e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "18a4bb52-ce0d-4061-b197-e524555e09e1" (UID: "18a4bb52-ce0d-4061-b197-e524555e09e1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.523009 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a4bb52-ce0d-4061-b197-e524555e09e1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18a4bb52-ce0d-4061-b197-e524555e09e1" (UID: "18a4bb52-ce0d-4061-b197-e524555e09e1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.530191 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a4bb52-ce0d-4061-b197-e524555e09e1-kube-api-access-l2v47" (OuterVolumeSpecName: "kube-api-access-l2v47") pod "18a4bb52-ce0d-4061-b197-e524555e09e1" (UID: "18a4bb52-ce0d-4061-b197-e524555e09e1"). InnerVolumeSpecName "kube-api-access-l2v47". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.618975 4817 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a4bb52-ce0d-4061-b197-e524555e09e1-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.619019 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2v47\" (UniqueName: \"kubernetes.io/projected/18a4bb52-ce0d-4061-b197-e524555e09e1-kube-api-access-l2v47\") on node \"crc\" DevicePath \"\"" Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.619033 4817 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a4bb52-ce0d-4061-b197-e524555e09e1-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.747314 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e/memcached/0.log" Mar 14 06:45:03 crc 
kubenswrapper[4817]: I0314 06:45:03.995477 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" event={"ID":"18a4bb52-ce0d-4061-b197-e524555e09e1","Type":"ContainerDied","Data":"ac7db0cf8f126476f6b4b3a6a93e0b00e05ab4926288326d682b029fab258772"} Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.995516 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac7db0cf8f126476f6b4b3a6a93e0b00e05ab4926288326d682b029fab258772" Mar 14 06:45:03 crc kubenswrapper[4817]: I0314 06:45:03.995529 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557845-pdz4f" Mar 14 06:45:04 crc kubenswrapper[4817]: I0314 06:45:04.447503 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4"] Mar 14 06:45:04 crc kubenswrapper[4817]: I0314 06:45:04.457508 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557800-fmxs4"] Mar 14 06:45:04 crc kubenswrapper[4817]: I0314 06:45:04.744960 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65221d2c-5834-4b80-a1b6-a0240e6d77de" path="/var/lib/kubelet/pods/65221d2c-5834-4b80-a1b6-a0240e6d77de/volumes" Mar 14 06:45:19 crc kubenswrapper[4817]: I0314 06:45:19.815187 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/util/0.log" Mar 14 06:45:20 crc kubenswrapper[4817]: I0314 06:45:20.370551 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/pull/0.log" Mar 14 06:45:20 crc kubenswrapper[4817]: I0314 06:45:20.404096 4817 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/pull/0.log" Mar 14 06:45:20 crc kubenswrapper[4817]: I0314 06:45:20.433277 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/util/0.log" Mar 14 06:45:20 crc kubenswrapper[4817]: I0314 06:45:20.577833 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/util/0.log" Mar 14 06:45:20 crc kubenswrapper[4817]: I0314 06:45:20.621622 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/pull/0.log" Mar 14 06:45:20 crc kubenswrapper[4817]: I0314 06:45:20.629392 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/extract/0.log" Mar 14 06:45:20 crc kubenswrapper[4817]: I0314 06:45:20.820701 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-d47688694-7cvhh_71a2aa8e-73f2-46c2-b8ad-2230259a3ede/manager/0.log" Mar 14 06:45:21 crc kubenswrapper[4817]: I0314 06:45:21.030629 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-c4p8n_c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50/manager/0.log" Mar 14 06:45:21 crc kubenswrapper[4817]: I0314 06:45:21.357016 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-qrpsc_0d8de2cd-0cc8-40b5-a549-7632e38e11a9/manager/0.log" Mar 14 06:45:21 
crc kubenswrapper[4817]: I0314 06:45:21.381918 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-lssxv_78bda46b-797b-4cc4-9cf5-14a2bc692947/manager/0.log" Mar 14 06:45:21 crc kubenswrapper[4817]: I0314 06:45:21.637543 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-9w9wj_2b1ddd08-cff6-4ec8-b701-77ad200ebd2f/manager/0.log" Mar 14 06:45:22 crc kubenswrapper[4817]: I0314 06:45:22.089184 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bc894d9b-blx9z_895e1039-8354-4cb0-85d0-a0b2cc112db6/manager/0.log" Mar 14 06:45:22 crc kubenswrapper[4817]: I0314 06:45:22.380004 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-mxnmp_2d86f031-6e59-41cb-a7c1-cfe91c54630b/manager/0.log" Mar 14 06:45:22 crc kubenswrapper[4817]: I0314 06:45:22.435445 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54dc5b8f8d-sbktt_15bddcc1-f479-4390-96f1-f0fd2cd43578/manager/0.log" Mar 14 06:45:22 crc kubenswrapper[4817]: I0314 06:45:22.441436 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-vsjwb_f1029de4-c046-47b8-820b-113369bf590a/manager/0.log" Mar 14 06:45:22 crc kubenswrapper[4817]: I0314 06:45:22.636674 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6b74cf5dc5-4tws4_6dc5b773-7e09-4d0c-b7fb-e73a398784dd/manager/0.log" Mar 14 06:45:22 crc kubenswrapper[4817]: I0314 06:45:22.643240 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf_3d2cc81d-8675-4b17-a429-c0a29be998d9/manager/0.log" Mar 14 
06:45:22 crc kubenswrapper[4817]: I0314 06:45:22.906589 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-gzl7j_4e484bfe-93ce-49cb-b687-2fd92d4a8b60/manager/0.log" Mar 14 06:45:22 crc kubenswrapper[4817]: I0314 06:45:22.926737 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f84474648-kzrj9_caf0784a-accc-4973-8daf-7239f91eacb3/manager/0.log" Mar 14 06:45:23 crc kubenswrapper[4817]: I0314 06:45:23.059582 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-xvl5j_7435e15d-0e12-4192-9725-59c501707754/manager/0.log" Mar 14 06:45:23 crc kubenswrapper[4817]: I0314 06:45:23.108053 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7v78tk_2c529641-9582-4a57-b54d-f1f733f21a89/manager/0.log" Mar 14 06:45:23 crc kubenswrapper[4817]: I0314 06:45:23.355909 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bf7c47ddb-srdqq_32b598c5-f4cd-4c5d-9189-e8985b451ae2/operator/0.log" Mar 14 06:45:23 crc kubenswrapper[4817]: I0314 06:45:23.453660 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-686t2_e2e7b4b7-9377-4f51-92ff-8d8024a13484/registry-server/0.log" Mar 14 06:45:23 crc kubenswrapper[4817]: I0314 06:45:23.683834 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-75w5q_12489bc5-14ae-42cb-9717-341e479b9e53/manager/0.log" Mar 14 06:45:23 crc kubenswrapper[4817]: I0314 06:45:23.733930 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-hs4lc_5b65131d-d231-46ae-b5f9-95c9e4a0d69a/manager/0.log" 
Mar 14 06:45:23 crc kubenswrapper[4817]: I0314 06:45:23.923512 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6dknb_9d193754-974c-4c1a-a142-28fc5f109935/operator/0.log" Mar 14 06:45:24 crc kubenswrapper[4817]: I0314 06:45:24.029848 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7f9cc5dd44-nb94v_be9a0979-c4ed-471b-9cc9-c3dd753f106d/manager/0.log" Mar 14 06:45:24 crc kubenswrapper[4817]: I0314 06:45:24.232361 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6854b8b9d9-cr7p6_fa24c106-dfe2-4250-9b00-b063f21f0dcd/manager/0.log" Mar 14 06:45:24 crc kubenswrapper[4817]: I0314 06:45:24.269086 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-9j88f_762056c0-1243-4e41-87ea-242c1d082965/manager/0.log" Mar 14 06:45:24 crc kubenswrapper[4817]: I0314 06:45:24.469672 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-xrd4b_d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c/manager/0.log" Mar 14 06:45:24 crc kubenswrapper[4817]: I0314 06:45:24.703782 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5bb879dbb8-vlsfx_1351dc38-2b39-4e57-869d-1b430e900250/manager/0.log" Mar 14 06:45:25 crc kubenswrapper[4817]: I0314 06:45:25.802111 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t2jch"] Mar 14 06:45:25 crc kubenswrapper[4817]: E0314 06:45:25.803686 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a4bb52-ce0d-4061-b197-e524555e09e1" containerName="collect-profiles" Mar 14 06:45:25 crc kubenswrapper[4817]: I0314 06:45:25.803775 4817 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="18a4bb52-ce0d-4061-b197-e524555e09e1" containerName="collect-profiles" Mar 14 06:45:25 crc kubenswrapper[4817]: I0314 06:45:25.804182 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a4bb52-ce0d-4061-b197-e524555e09e1" containerName="collect-profiles" Mar 14 06:45:25 crc kubenswrapper[4817]: I0314 06:45:25.805997 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:25 crc kubenswrapper[4817]: I0314 06:45:25.826558 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2jch"] Mar 14 06:45:25 crc kubenswrapper[4817]: I0314 06:45:25.973176 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-utilities\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:25 crc kubenswrapper[4817]: I0314 06:45:25.973288 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxf96\" (UniqueName: \"kubernetes.io/projected/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-kube-api-access-dxf96\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:25 crc kubenswrapper[4817]: I0314 06:45:25.973623 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-catalog-content\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:26 crc kubenswrapper[4817]: I0314 06:45:26.075360 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-utilities\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:26 crc kubenswrapper[4817]: I0314 06:45:26.075418 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxf96\" (UniqueName: \"kubernetes.io/projected/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-kube-api-access-dxf96\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:26 crc kubenswrapper[4817]: I0314 06:45:26.075498 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-catalog-content\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:26 crc kubenswrapper[4817]: I0314 06:45:26.076068 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-catalog-content\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:26 crc kubenswrapper[4817]: I0314 06:45:26.076066 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-utilities\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:26 crc kubenswrapper[4817]: I0314 06:45:26.099831 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dxf96\" (UniqueName: \"kubernetes.io/projected/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-kube-api-access-dxf96\") pod \"certified-operators-t2jch\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:26 crc kubenswrapper[4817]: I0314 06:45:26.126068 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:26 crc kubenswrapper[4817]: I0314 06:45:26.729984 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2jch"] Mar 14 06:45:27 crc kubenswrapper[4817]: I0314 06:45:27.228164 4817 generic.go:334] "Generic (PLEG): container finished" podID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerID="b65b9b440c83f8e6acab619d312aa790d8eb4cb5701ef326d7faddc45325a84f" exitCode=0 Mar 14 06:45:27 crc kubenswrapper[4817]: I0314 06:45:27.228292 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2jch" event={"ID":"c21d7ab6-ed08-4d2c-8d06-4c06c1605575","Type":"ContainerDied","Data":"b65b9b440c83f8e6acab619d312aa790d8eb4cb5701ef326d7faddc45325a84f"} Mar 14 06:45:27 crc kubenswrapper[4817]: I0314 06:45:27.230591 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2jch" event={"ID":"c21d7ab6-ed08-4d2c-8d06-4c06c1605575","Type":"ContainerStarted","Data":"535628609de3cd2b4061391c1b319d87e93b81c2588952ef621dafec12c46508"} Mar 14 06:45:28 crc kubenswrapper[4817]: I0314 06:45:28.242162 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2jch" event={"ID":"c21d7ab6-ed08-4d2c-8d06-4c06c1605575","Type":"ContainerStarted","Data":"3cf1ceaecb99f9bf36af8266ab3fbaa6bf1d6b4d87a52b57784f46acedc6da1a"} Mar 14 06:45:29 crc kubenswrapper[4817]: I0314 06:45:29.252220 4817 generic.go:334] "Generic (PLEG): container finished" 
podID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerID="3cf1ceaecb99f9bf36af8266ab3fbaa6bf1d6b4d87a52b57784f46acedc6da1a" exitCode=0 Mar 14 06:45:29 crc kubenswrapper[4817]: I0314 06:45:29.252286 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2jch" event={"ID":"c21d7ab6-ed08-4d2c-8d06-4c06c1605575","Type":"ContainerDied","Data":"3cf1ceaecb99f9bf36af8266ab3fbaa6bf1d6b4d87a52b57784f46acedc6da1a"} Mar 14 06:45:31 crc kubenswrapper[4817]: I0314 06:45:31.274808 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2jch" event={"ID":"c21d7ab6-ed08-4d2c-8d06-4c06c1605575","Type":"ContainerStarted","Data":"48cbac54bed7a7d6fe4ce95e37509be765543f8d47a28b7d4b10da2a7155c84b"} Mar 14 06:45:31 crc kubenswrapper[4817]: I0314 06:45:31.295073 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t2jch" podStartSLOduration=2.853721737 podStartE2EDuration="6.295045151s" podCreationTimestamp="2026-03-14 06:45:25 +0000 UTC" firstStartedPulling="2026-03-14 06:45:27.230245699 +0000 UTC m=+4381.268506445" lastFinishedPulling="2026-03-14 06:45:30.671569103 +0000 UTC m=+4384.709829859" observedRunningTime="2026-03-14 06:45:31.289551015 +0000 UTC m=+4385.327811781" watchObservedRunningTime="2026-03-14 06:45:31.295045151 +0000 UTC m=+4385.333305897" Mar 14 06:45:36 crc kubenswrapper[4817]: I0314 06:45:36.126217 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:36 crc kubenswrapper[4817]: I0314 06:45:36.126664 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:36 crc kubenswrapper[4817]: I0314 06:45:36.175225 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t2jch" Mar 14 
06:45:36 crc kubenswrapper[4817]: I0314 06:45:36.377974 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:36 crc kubenswrapper[4817]: I0314 06:45:36.436688 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2jch"] Mar 14 06:45:38 crc kubenswrapper[4817]: I0314 06:45:38.339398 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t2jch" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerName="registry-server" containerID="cri-o://48cbac54bed7a7d6fe4ce95e37509be765543f8d47a28b7d4b10da2a7155c84b" gracePeriod=2 Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.356683 4817 generic.go:334] "Generic (PLEG): container finished" podID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerID="48cbac54bed7a7d6fe4ce95e37509be765543f8d47a28b7d4b10da2a7155c84b" exitCode=0 Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.356938 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2jch" event={"ID":"c21d7ab6-ed08-4d2c-8d06-4c06c1605575","Type":"ContainerDied","Data":"48cbac54bed7a7d6fe4ce95e37509be765543f8d47a28b7d4b10da2a7155c84b"} Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.547341 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.666743 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-catalog-content\") pod \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.666852 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxf96\" (UniqueName: \"kubernetes.io/projected/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-kube-api-access-dxf96\") pod \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.667168 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-utilities\") pod \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\" (UID: \"c21d7ab6-ed08-4d2c-8d06-4c06c1605575\") " Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.668234 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-utilities" (OuterVolumeSpecName: "utilities") pod "c21d7ab6-ed08-4d2c-8d06-4c06c1605575" (UID: "c21d7ab6-ed08-4d2c-8d06-4c06c1605575"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.674285 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-kube-api-access-dxf96" (OuterVolumeSpecName: "kube-api-access-dxf96") pod "c21d7ab6-ed08-4d2c-8d06-4c06c1605575" (UID: "c21d7ab6-ed08-4d2c-8d06-4c06c1605575"). InnerVolumeSpecName "kube-api-access-dxf96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.700372 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c21d7ab6-ed08-4d2c-8d06-4c06c1605575" (UID: "c21d7ab6-ed08-4d2c-8d06-4c06c1605575"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.768983 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.769020 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxf96\" (UniqueName: \"kubernetes.io/projected/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-kube-api-access-dxf96\") on node \"crc\" DevicePath \"\"" Mar 14 06:45:39 crc kubenswrapper[4817]: I0314 06:45:39.769030 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21d7ab6-ed08-4d2c-8d06-4c06c1605575-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:45:40 crc kubenswrapper[4817]: I0314 06:45:40.376871 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2jch" event={"ID":"c21d7ab6-ed08-4d2c-8d06-4c06c1605575","Type":"ContainerDied","Data":"535628609de3cd2b4061391c1b319d87e93b81c2588952ef621dafec12c46508"} Mar 14 06:45:40 crc kubenswrapper[4817]: I0314 06:45:40.377269 4817 scope.go:117] "RemoveContainer" containerID="48cbac54bed7a7d6fe4ce95e37509be765543f8d47a28b7d4b10da2a7155c84b" Mar 14 06:45:40 crc kubenswrapper[4817]: I0314 06:45:40.377012 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2jch" Mar 14 06:45:40 crc kubenswrapper[4817]: I0314 06:45:40.429327 4817 scope.go:117] "RemoveContainer" containerID="3cf1ceaecb99f9bf36af8266ab3fbaa6bf1d6b4d87a52b57784f46acedc6da1a" Mar 14 06:45:40 crc kubenswrapper[4817]: I0314 06:45:40.441375 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2jch"] Mar 14 06:45:40 crc kubenswrapper[4817]: I0314 06:45:40.454851 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t2jch"] Mar 14 06:45:40 crc kubenswrapper[4817]: I0314 06:45:40.668353 4817 scope.go:117] "RemoveContainer" containerID="b65b9b440c83f8e6acab619d312aa790d8eb4cb5701ef326d7faddc45325a84f" Mar 14 06:45:40 crc kubenswrapper[4817]: I0314 06:45:40.745826 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" path="/var/lib/kubelet/pods/c21d7ab6-ed08-4d2c-8d06-4c06c1605575/volumes" Mar 14 06:45:46 crc kubenswrapper[4817]: I0314 06:45:46.757730 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mgpqw_5f8f4a28-ea66-4a2a-8cc8-ad845efd3266/control-plane-machine-set-operator/0.log" Mar 14 06:45:46 crc kubenswrapper[4817]: I0314 06:45:46.964450 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7zj7n_9f2c60cc-433e-4f50-8eca-2fb0ddd7982d/kube-rbac-proxy/0.log" Mar 14 06:45:46 crc kubenswrapper[4817]: I0314 06:45:46.996510 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7zj7n_9f2c60cc-433e-4f50-8eca-2fb0ddd7982d/machine-api-operator/0.log" Mar 14 06:45:47 crc kubenswrapper[4817]: I0314 06:45:47.612917 4817 scope.go:117] "RemoveContainer" containerID="f4ba8d4057882ebb041febf063553bb8dbaba7b15d058cd6598536c175f1b67d" Mar 14 
06:45:54 crc kubenswrapper[4817]: I0314 06:45:54.814237 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-549548697-x46rl" podUID="ea9ff5c7-bf16-488f-8289-cbc134c9416e" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.148823 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557846-ppx5s"] Mar 14 06:46:00 crc kubenswrapper[4817]: E0314 06:46:00.149987 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerName="registry-server" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.150005 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerName="registry-server" Mar 14 06:46:00 crc kubenswrapper[4817]: E0314 06:46:00.150027 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerName="extract-content" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.150035 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerName="extract-content" Mar 14 06:46:00 crc kubenswrapper[4817]: E0314 06:46:00.150061 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerName="extract-utilities" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.150071 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerName="extract-utilities" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.150313 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21d7ab6-ed08-4d2c-8d06-4c06c1605575" containerName="registry-server" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.151194 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.154785 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.154861 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.164291 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557846-ppx5s"] Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.164723 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.288985 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgfsb\" (UniqueName: \"kubernetes.io/projected/8d6bb502-38ee-47fe-9035-ae305cbb5a5d-kube-api-access-fgfsb\") pod \"auto-csr-approver-29557846-ppx5s\" (UID: \"8d6bb502-38ee-47fe-9035-ae305cbb5a5d\") " pod="openshift-infra/auto-csr-approver-29557846-ppx5s" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.393726 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgfsb\" (UniqueName: \"kubernetes.io/projected/8d6bb502-38ee-47fe-9035-ae305cbb5a5d-kube-api-access-fgfsb\") pod \"auto-csr-approver-29557846-ppx5s\" (UID: \"8d6bb502-38ee-47fe-9035-ae305cbb5a5d\") " pod="openshift-infra/auto-csr-approver-29557846-ppx5s" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.419789 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgfsb\" (UniqueName: \"kubernetes.io/projected/8d6bb502-38ee-47fe-9035-ae305cbb5a5d-kube-api-access-fgfsb\") pod \"auto-csr-approver-29557846-ppx5s\" (UID: \"8d6bb502-38ee-47fe-9035-ae305cbb5a5d\") " 
pod="openshift-infra/auto-csr-approver-29557846-ppx5s" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.470791 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" Mar 14 06:46:00 crc kubenswrapper[4817]: I0314 06:46:00.933460 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557846-ppx5s"] Mar 14 06:46:00 crc kubenswrapper[4817]: W0314 06:46:00.934516 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d6bb502_38ee_47fe_9035_ae305cbb5a5d.slice/crio-85317f4184d882ae296ebbd52a2285573ed9e27e3eff09625ca61dba4b2d2c1b WatchSource:0}: Error finding container 85317f4184d882ae296ebbd52a2285573ed9e27e3eff09625ca61dba4b2d2c1b: Status 404 returned error can't find the container with id 85317f4184d882ae296ebbd52a2285573ed9e27e3eff09625ca61dba4b2d2c1b Mar 14 06:46:01 crc kubenswrapper[4817]: I0314 06:46:01.175800 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wlf2j_5791cc48-c47e-41dd-9679-e38124d37511/cert-manager-controller/0.log" Mar 14 06:46:01 crc kubenswrapper[4817]: I0314 06:46:01.337331 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xpmdf_15c7af3c-d545-4e70-b954-29763522ee1f/cert-manager-cainjector/0.log" Mar 14 06:46:01 crc kubenswrapper[4817]: I0314 06:46:01.588609 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" event={"ID":"8d6bb502-38ee-47fe-9035-ae305cbb5a5d","Type":"ContainerStarted","Data":"85317f4184d882ae296ebbd52a2285573ed9e27e3eff09625ca61dba4b2d2c1b"} Mar 14 06:46:02 crc kubenswrapper[4817]: I0314 06:46:02.313443 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-64mg6_c9590aff-9888-44dc-ab0c-47959f244b5e/cert-manager-webhook/0.log" Mar 14 06:46:02 crc kubenswrapper[4817]: I0314 06:46:02.598707 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" event={"ID":"8d6bb502-38ee-47fe-9035-ae305cbb5a5d","Type":"ContainerStarted","Data":"fadf4bab782b639e9e0c55d7f9e65a374bb1a58b80b054d53b7970408d2a9e26"} Mar 14 06:46:02 crc kubenswrapper[4817]: I0314 06:46:02.613572 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" podStartSLOduration=1.4560075860000001 podStartE2EDuration="2.613550909s" podCreationTimestamp="2026-03-14 06:46:00 +0000 UTC" firstStartedPulling="2026-03-14 06:46:00.936962281 +0000 UTC m=+4414.975223027" lastFinishedPulling="2026-03-14 06:46:02.094505604 +0000 UTC m=+4416.132766350" observedRunningTime="2026-03-14 06:46:02.610845952 +0000 UTC m=+4416.649106708" watchObservedRunningTime="2026-03-14 06:46:02.613550909 +0000 UTC m=+4416.651811655" Mar 14 06:46:03 crc kubenswrapper[4817]: I0314 06:46:03.608674 4817 generic.go:334] "Generic (PLEG): container finished" podID="8d6bb502-38ee-47fe-9035-ae305cbb5a5d" containerID="fadf4bab782b639e9e0c55d7f9e65a374bb1a58b80b054d53b7970408d2a9e26" exitCode=0 Mar 14 06:46:03 crc kubenswrapper[4817]: I0314 06:46:03.608732 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" event={"ID":"8d6bb502-38ee-47fe-9035-ae305cbb5a5d","Type":"ContainerDied","Data":"fadf4bab782b639e9e0c55d7f9e65a374bb1a58b80b054d53b7970408d2a9e26"} Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.013999 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.090243 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgfsb\" (UniqueName: \"kubernetes.io/projected/8d6bb502-38ee-47fe-9035-ae305cbb5a5d-kube-api-access-fgfsb\") pod \"8d6bb502-38ee-47fe-9035-ae305cbb5a5d\" (UID: \"8d6bb502-38ee-47fe-9035-ae305cbb5a5d\") " Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.098162 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6bb502-38ee-47fe-9035-ae305cbb5a5d-kube-api-access-fgfsb" (OuterVolumeSpecName: "kube-api-access-fgfsb") pod "8d6bb502-38ee-47fe-9035-ae305cbb5a5d" (UID: "8d6bb502-38ee-47fe-9035-ae305cbb5a5d"). InnerVolumeSpecName "kube-api-access-fgfsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.192573 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgfsb\" (UniqueName: \"kubernetes.io/projected/8d6bb502-38ee-47fe-9035-ae305cbb5a5d-kube-api-access-fgfsb\") on node \"crc\" DevicePath \"\"" Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.628090 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" event={"ID":"8d6bb502-38ee-47fe-9035-ae305cbb5a5d","Type":"ContainerDied","Data":"85317f4184d882ae296ebbd52a2285573ed9e27e3eff09625ca61dba4b2d2c1b"} Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.628448 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85317f4184d882ae296ebbd52a2285573ed9e27e3eff09625ca61dba4b2d2c1b" Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.628162 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557846-ppx5s" Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.688444 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557840-mvz42"] Mar 14 06:46:05 crc kubenswrapper[4817]: I0314 06:46:05.699104 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557840-mvz42"] Mar 14 06:46:06 crc kubenswrapper[4817]: I0314 06:46:06.744402 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56529938-5eac-44ab-b44b-9b0cd6581384" path="/var/lib/kubelet/pods/56529938-5eac-44ab-b44b-9b0cd6581384/volumes" Mar 14 06:46:08 crc kubenswrapper[4817]: I0314 06:46:08.566234 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:46:08 crc kubenswrapper[4817]: I0314 06:46:08.566857 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:46:15 crc kubenswrapper[4817]: I0314 06:46:15.945675 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-q5lsz_2adf871a-8f81-480f-9f20-afe8cfeb93f5/nmstate-console-plugin/0.log" Mar 14 06:46:16 crc kubenswrapper[4817]: I0314 06:46:16.167218 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bkggf_c99a8dbf-19cc-401a-8569-0add8d2a31bb/nmstate-handler/0.log" Mar 14 06:46:16 crc kubenswrapper[4817]: I0314 06:46:16.253973 4817 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-8frq4_6b497a2e-9cc7-4484-963f-b2e84eb3681a/kube-rbac-proxy/0.log" Mar 14 06:46:16 crc kubenswrapper[4817]: I0314 06:46:16.326418 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-8frq4_6b497a2e-9cc7-4484-963f-b2e84eb3681a/nmstate-metrics/0.log" Mar 14 06:46:16 crc kubenswrapper[4817]: I0314 06:46:16.365373 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-dnl76_1bf623dc-b3e0-45af-9273-bc1367d82ab3/nmstate-operator/0.log" Mar 14 06:46:16 crc kubenswrapper[4817]: I0314 06:46:16.516284 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-d565t_ccadd686-fc95-409d-b7a4-b2e797265e56/nmstate-webhook/0.log" Mar 14 06:46:38 crc kubenswrapper[4817]: I0314 06:46:38.565528 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:46:38 crc kubenswrapper[4817]: I0314 06:46:38.566371 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:46:45 crc kubenswrapper[4817]: I0314 06:46:45.882691 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-88cxz_e421e9b6-37af-4150-8a96-419fe1e1f267/kube-rbac-proxy/0.log" Mar 14 06:46:45 crc kubenswrapper[4817]: I0314 06:46:45.960927 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-7bb4cc7c98-88cxz_e421e9b6-37af-4150-8a96-419fe1e1f267/controller/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.167991 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-frr-files/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.298904 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-frr-files/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.318470 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-reloader/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.333710 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-metrics/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.386863 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-reloader/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.531565 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-reloader/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.553162 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-metrics/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.563736 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-frr-files/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.579665 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-metrics/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.787756 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-metrics/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.787802 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-reloader/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.811109 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/controller/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.826677 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-frr-files/0.log" Mar 14 06:46:46 crc kubenswrapper[4817]: I0314 06:46:46.984036 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/frr-metrics/0.log" Mar 14 06:46:47 crc kubenswrapper[4817]: I0314 06:46:47.056667 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/kube-rbac-proxy-frr/0.log" Mar 14 06:46:47 crc kubenswrapper[4817]: I0314 06:46:47.133867 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/kube-rbac-proxy/0.log" Mar 14 06:46:47 crc kubenswrapper[4817]: I0314 06:46:47.182347 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/reloader/0.log" Mar 14 06:46:47 crc kubenswrapper[4817]: I0314 06:46:47.365655 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-hqjc5_1393ab55-8647-4411-86f7-a034c8bbd227/frr-k8s-webhook-server/0.log" Mar 14 06:46:47 crc kubenswrapper[4817]: I0314 06:46:47.691220 4817 scope.go:117] "RemoveContainer" containerID="d2cb333d0f88db48d4c4c1d90b1d8c2869626918f9c25190fc3128e4e3d239b0" Mar 14 06:46:48 crc kubenswrapper[4817]: I0314 06:46:48.137238 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f7859bbb-rtk7b_55748352-cae5-4b0d-8d5d-ed70b1e62fbd/manager/0.log" Mar 14 06:46:48 crc kubenswrapper[4817]: I0314 06:46:48.194025 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-757d57bdfc-8q5gf_707a2b72-26b7-48a9-b7e6-dcf7989deb6b/webhook-server/0.log" Mar 14 06:46:48 crc kubenswrapper[4817]: I0314 06:46:48.572605 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jntvk_420d83f8-e6b6-4433-8e63-ae624bcf1241/kube-rbac-proxy/0.log" Mar 14 06:46:48 crc kubenswrapper[4817]: I0314 06:46:48.843937 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/frr/0.log" Mar 14 06:46:49 crc kubenswrapper[4817]: I0314 06:46:49.032785 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jntvk_420d83f8-e6b6-4433-8e63-ae624bcf1241/speaker/0.log" Mar 14 06:47:03 crc kubenswrapper[4817]: I0314 06:47:03.739556 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/util/0.log" Mar 14 06:47:03 crc kubenswrapper[4817]: I0314 06:47:03.954646 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/pull/0.log" Mar 14 06:47:03 crc 
kubenswrapper[4817]: I0314 06:47:03.961312 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/util/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.003410 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/pull/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.182048 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/pull/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.188513 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/util/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.213595 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/extract/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.334339 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/util/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.537782 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/util/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.543258 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/pull/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.562723 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/pull/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.726027 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/pull/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.727520 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/util/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.759596 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/extract/0.log" Mar 14 06:47:04 crc kubenswrapper[4817]: I0314 06:47:04.891223 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-utilities/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.086451 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-content/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.091697 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-content/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.130745 4817 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-utilities/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.320883 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-utilities/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.412473 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-content/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.541103 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-utilities/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.825414 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-utilities/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.848264 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-content/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.851266 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-content/0.log" Mar 14 06:47:05 crc kubenswrapper[4817]: I0314 06:47:05.997880 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/registry-server/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.062353 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-content/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.142694 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-utilities/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.296585 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gtb86_22e59375-f50e-4050-aeeb-a305ffcb3572/marketplace-operator/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.335449 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/registry-server/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.488451 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-utilities/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.685305 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-content/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.709619 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-utilities/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.712164 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-content/0.log" Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.899610 4817 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-content/0.log"
Mar 14 06:47:06 crc kubenswrapper[4817]: I0314 06:47:06.909374 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-utilities/0.log"
Mar 14 06:47:07 crc kubenswrapper[4817]: I0314 06:47:07.075771 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/registry-server/0.log"
Mar 14 06:47:07 crc kubenswrapper[4817]: I0314 06:47:07.100046 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-utilities/0.log"
Mar 14 06:47:07 crc kubenswrapper[4817]: I0314 06:47:07.292261 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-content/0.log"
Mar 14 06:47:07 crc kubenswrapper[4817]: I0314 06:47:07.326495 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-utilities/0.log"
Mar 14 06:47:07 crc kubenswrapper[4817]: I0314 06:47:07.340821 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-content/0.log"
Mar 14 06:47:07 crc kubenswrapper[4817]: I0314 06:47:07.586021 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-content/0.log"
Mar 14 06:47:07 crc kubenswrapper[4817]: I0314 06:47:07.599507 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-utilities/0.log"
Mar 14 06:47:08 crc kubenswrapper[4817]: I0314 06:47:08.178101 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/registry-server/0.log"
Mar 14 06:47:08 crc kubenswrapper[4817]: I0314 06:47:08.565353 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:47:08 crc kubenswrapper[4817]: I0314 06:47:08.565709 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:47:08 crc kubenswrapper[4817]: I0314 06:47:08.565768 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl"
Mar 14 06:47:08 crc kubenswrapper[4817]: I0314 06:47:08.566625 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 14 06:47:08 crc kubenswrapper[4817]: I0314 06:47:08.566689 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
gracePeriod=600
Mar 14 06:47:08 crc kubenswrapper[4817]: E0314 06:47:08.854747 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:47:09 crc kubenswrapper[4817]: I0314 06:47:09.217034 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" exitCode=0
Mar 14 06:47:09 crc kubenswrapper[4817]: I0314 06:47:09.217089 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"}
Mar 14 06:47:09 crc kubenswrapper[4817]: I0314 06:47:09.217369 4817 scope.go:117] "RemoveContainer" containerID="70880caa4a057b35883c3ab3eebd0dbd099513e1fdd2a5a5f0cfa99e7431ada4"
Mar 14 06:47:09 crc kubenswrapper[4817]: I0314 06:47:09.218082 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:47:09 crc kubenswrapper[4817]: E0314 06:47:09.218317 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:47:21 crc kubenswrapper[4817]: I0314 06:47:21.731806 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:47:21 crc kubenswrapper[4817]: E0314 06:47:21.732563 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:47:34 crc kubenswrapper[4817]: I0314 06:47:34.732308 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:47:34 crc kubenswrapper[4817]: E0314 06:47:34.733151 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:47:48 crc kubenswrapper[4817]: I0314 06:47:48.732637 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:47:48 crc kubenswrapper[4817]: E0314 06:47:48.734914 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:47:59 crc
kubenswrapper[4817]: I0314 06:47:59.732372 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:47:59 crc kubenswrapper[4817]: E0314 06:47:59.733307 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.144933 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557848-lfg92"]
Mar 14 06:48:00 crc kubenswrapper[4817]: E0314 06:48:00.145730 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6bb502-38ee-47fe-9035-ae305cbb5a5d" containerName="oc"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.145746 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6bb502-38ee-47fe-9035-ae305cbb5a5d" containerName="oc"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.146016 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6bb502-38ee-47fe-9035-ae305cbb5a5d" containerName="oc"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.146676 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557848-lfg92"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.154364 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.155528 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.155728 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.173476 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557848-lfg92"]
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.280017 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzv85\" (UniqueName: \"kubernetes.io/projected/d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8-kube-api-access-lzv85\") pod \"auto-csr-approver-29557848-lfg92\" (UID: \"d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8\") " pod="openshift-infra/auto-csr-approver-29557848-lfg92"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.383218 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzv85\" (UniqueName: \"kubernetes.io/projected/d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8-kube-api-access-lzv85\") pod \"auto-csr-approver-29557848-lfg92\" (UID: \"d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8\") " pod="openshift-infra/auto-csr-approver-29557848-lfg92"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.405106 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzv85\" (UniqueName: \"kubernetes.io/projected/d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8-kube-api-access-lzv85\") pod \"auto-csr-approver-29557848-lfg92\" (UID: \"d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8\") "
pod="openshift-infra/auto-csr-approver-29557848-lfg92"
Mar 14 06:48:00 crc kubenswrapper[4817]: I0314 06:48:00.477998 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557848-lfg92"
Mar 14 06:48:01 crc kubenswrapper[4817]: I0314 06:48:01.007020 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557848-lfg92"]
Mar 14 06:48:01 crc kubenswrapper[4817]: I0314 06:48:01.014735 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 06:48:01 crc kubenswrapper[4817]: I0314 06:48:01.694637 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557848-lfg92" event={"ID":"d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8","Type":"ContainerStarted","Data":"8d5c91208af3034d4dda8062820a2513d934c0dc5d69941fbce74cea66231033"}
Mar 14 06:48:02 crc kubenswrapper[4817]: I0314 06:48:02.703842 4817 generic.go:334] "Generic (PLEG): container finished" podID="d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8" containerID="f1305c7a640a97a873b799351d97dc105082383814f35c100feb0c4fc3de799a" exitCode=0
Mar 14 06:48:02 crc kubenswrapper[4817]: I0314 06:48:02.704571 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557848-lfg92" event={"ID":"d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8","Type":"ContainerDied","Data":"f1305c7a640a97a873b799351d97dc105082383814f35c100feb0c4fc3de799a"}
Mar 14 06:48:04 crc kubenswrapper[4817]: I0314 06:48:04.157349 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557848-lfg92"
Mar 14 06:48:04 crc kubenswrapper[4817]: I0314 06:48:04.269674 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzv85\" (UniqueName: \"kubernetes.io/projected/d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8-kube-api-access-lzv85\") pod \"d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8\" (UID: \"d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8\") "
Mar 14 06:48:04 crc kubenswrapper[4817]: I0314 06:48:04.276572 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8-kube-api-access-lzv85" (OuterVolumeSpecName: "kube-api-access-lzv85") pod "d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8" (UID: "d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8"). InnerVolumeSpecName "kube-api-access-lzv85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:48:04 crc kubenswrapper[4817]: I0314 06:48:04.371906 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzv85\" (UniqueName: \"kubernetes.io/projected/d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8-kube-api-access-lzv85\") on node \"crc\" DevicePath \"\""
Mar 14 06:48:04 crc kubenswrapper[4817]: I0314 06:48:04.727409 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557848-lfg92" event={"ID":"d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8","Type":"ContainerDied","Data":"8d5c91208af3034d4dda8062820a2513d934c0dc5d69941fbce74cea66231033"}
Mar 14 06:48:04 crc kubenswrapper[4817]: I0314 06:48:04.727738 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d5c91208af3034d4dda8062820a2513d934c0dc5d69941fbce74cea66231033"
Mar 14 06:48:04 crc kubenswrapper[4817]: I0314 06:48:04.727808 4817 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557848-lfg92"
Mar 14 06:48:05 crc kubenswrapper[4817]: I0314 06:48:05.234292 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557842-glgk9"]
Mar 14 06:48:05 crc kubenswrapper[4817]: I0314 06:48:05.245281 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557842-glgk9"]
Mar 14 06:48:06 crc kubenswrapper[4817]: I0314 06:48:06.756967 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3db2c6-b136-4a92-bf84-ec29d8121c48" path="/var/lib/kubelet/pods/7e3db2c6-b136-4a92-bf84-ec29d8121c48/volumes"
Mar 14 06:48:14 crc kubenswrapper[4817]: I0314 06:48:14.732206 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:48:14 crc kubenswrapper[4817]: E0314 06:48:14.733006 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:48:28 crc kubenswrapper[4817]: I0314 06:48:28.732049 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:48:28 crc kubenswrapper[4817]: E0314 06:48:28.732988 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:48:43 crc kubenswrapper[4817]: I0314 06:48:43.732644 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:48:43 crc kubenswrapper[4817]: E0314 06:48:43.733743 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:48:48 crc kubenswrapper[4817]: I0314 06:48:48.064994 4817 scope.go:117] "RemoveContainer" containerID="472393ed2bac0eb961abdf7f39efd4e01b4c6af9ece8d9c7d036b9c4d4b1bc98"
Mar 14 06:48:56 crc kubenswrapper[4817]: I0314 06:48:56.745647 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:48:56 crc kubenswrapper[4817]: E0314 06:48:56.746624 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:49:11 crc kubenswrapper[4817]: I0314 06:49:11.734706 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:49:11 crc kubenswrapper[4817]: E0314 06:49:11.735918 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed
container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:49:13 crc kubenswrapper[4817]: I0314 06:49:13.447165 4817 generic.go:334] "Generic (PLEG): container finished" podID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerID="7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637" exitCode=0
Mar 14 06:49:13 crc kubenswrapper[4817]: I0314 06:49:13.447272 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-r5dsk/must-gather-lmv7f" event={"ID":"3da5f098-d875-4421-97c1-f1a445fe18ea","Type":"ContainerDied","Data":"7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637"}
Mar 14 06:49:13 crc kubenswrapper[4817]: I0314 06:49:13.448170 4817 scope.go:117] "RemoveContainer" containerID="7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637"
Mar 14 06:49:14 crc kubenswrapper[4817]: I0314 06:49:14.141838 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r5dsk_must-gather-lmv7f_3da5f098-d875-4421-97c1-f1a445fe18ea/gather/0.log"
Mar 14 06:49:21 crc kubenswrapper[4817]: I0314 06:49:21.488949 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-r5dsk/must-gather-lmv7f"]
Mar 14 06:49:21 crc kubenswrapper[4817]: I0314 06:49:21.489627 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-r5dsk/must-gather-lmv7f" podUID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerName="copy" containerID="cri-o://1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03" gracePeriod=2
Mar 14 06:49:21 crc kubenswrapper[4817]: I0314 06:49:21.505163 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-r5dsk/must-gather-lmv7f"]
Mar 14 06:49:21 crc kubenswrapper[4817]: I0314 06:49:21.960280 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r5dsk_must-gather-lmv7f_3da5f098-d875-4421-97c1-f1a445fe18ea/copy/0.log"
Mar 14 06:49:21 crc kubenswrapper[4817]: I0314 06:49:21.960802 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.082246 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bg24\" (UniqueName: \"kubernetes.io/projected/3da5f098-d875-4421-97c1-f1a445fe18ea-kube-api-access-4bg24\") pod \"3da5f098-d875-4421-97c1-f1a445fe18ea\" (UID: \"3da5f098-d875-4421-97c1-f1a445fe18ea\") "
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.082329 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3da5f098-d875-4421-97c1-f1a445fe18ea-must-gather-output\") pod \"3da5f098-d875-4421-97c1-f1a445fe18ea\" (UID: \"3da5f098-d875-4421-97c1-f1a445fe18ea\") "
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.088787 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da5f098-d875-4421-97c1-f1a445fe18ea-kube-api-access-4bg24" (OuterVolumeSpecName: "kube-api-access-4bg24") pod "3da5f098-d875-4421-97c1-f1a445fe18ea" (UID: "3da5f098-d875-4421-97c1-f1a445fe18ea"). InnerVolumeSpecName "kube-api-access-4bg24".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.185024 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bg24\" (UniqueName: \"kubernetes.io/projected/3da5f098-d875-4421-97c1-f1a445fe18ea-kube-api-access-4bg24\") on node \"crc\" DevicePath \"\""
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.251478 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da5f098-d875-4421-97c1-f1a445fe18ea-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3da5f098-d875-4421-97c1-f1a445fe18ea" (UID: "3da5f098-d875-4421-97c1-f1a445fe18ea"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.286697 4817 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3da5f098-d875-4421-97c1-f1a445fe18ea-must-gather-output\") on node \"crc\" DevicePath \"\""
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.547977 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-r5dsk_must-gather-lmv7f_3da5f098-d875-4421-97c1-f1a445fe18ea/copy/0.log"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.548583 4817 generic.go:334] "Generic (PLEG): container finished" podID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerID="1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03" exitCode=143
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.548634 4817 scope.go:117] "RemoveContainer" containerID="1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.548647 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-r5dsk/must-gather-lmv7f"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.593444 4817 scope.go:117] "RemoveContainer" containerID="7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.643745 4817 scope.go:117] "RemoveContainer" containerID="1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03"
Mar 14 06:49:22 crc kubenswrapper[4817]: E0314 06:49:22.644318 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03\": container with ID starting with 1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03 not found: ID does not exist" containerID="1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.644366 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03"} err="failed to get container status \"1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03\": rpc error: code = NotFound desc = could not find container \"1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03\": container with ID starting with 1f24320c44b34e2b8c779cd29b3953d69204080d4d599b77bcf056102f900b03 not found: ID does not exist"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.644394 4817 scope.go:117] "RemoveContainer" containerID="7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637"
Mar 14 06:49:22 crc kubenswrapper[4817]: E0314 06:49:22.644718 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637\": container with ID starting with
7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637 not found: ID does not exist" containerID="7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.644768 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637"} err="failed to get container status \"7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637\": rpc error: code = NotFound desc = could not find container \"7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637\": container with ID starting with 7620dd64bd9549c7216785b2d3f6afd19caf8f975317ca3eb3e531367cf5e637 not found: ID does not exist"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.734111 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:49:22 crc kubenswrapper[4817]: E0314 06:49:22.734451 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:49:22 crc kubenswrapper[4817]: I0314 06:49:22.744350 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da5f098-d875-4421-97c1-f1a445fe18ea" path="/var/lib/kubelet/pods/3da5f098-d875-4421-97c1-f1a445fe18ea/volumes"
Mar 14 06:49:37 crc kubenswrapper[4817]: I0314 06:49:37.732639 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:49:37 crc kubenswrapper[4817]: E0314 06:49:37.733427 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:49:48 crc kubenswrapper[4817]: I0314 06:49:48.148802 4817 scope.go:117] "RemoveContainer" containerID="aa3896f12cb116f22c44499afd27ddf37140cc04223ce41d9b0e8b79c989d3e4"
Mar 14 06:49:49 crc kubenswrapper[4817]: I0314 06:49:49.732106 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235"
Mar 14 06:49:49 crc kubenswrapper[4817]: E0314 06:49:49.732576 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.142344 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557850-8m6zd"]
Mar 14 06:50:00 crc kubenswrapper[4817]: E0314 06:50:00.143295 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerName="copy"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.143311 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerName="copy"
Mar 14 06:50:00 crc kubenswrapper[4817]: E0314 06:50:00.143326 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerName="gather"
Mar 14 06:50:00 crc kubenswrapper[4817]:
I0314 06:50:00.143333 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerName="gather"
Mar 14 06:50:00 crc kubenswrapper[4817]: E0314 06:50:00.143376 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8" containerName="oc"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.143385 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8" containerName="oc"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.143596 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerName="gather"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.143615 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8" containerName="oc"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.143642 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da5f098-d875-4421-97c1-f1a445fe18ea" containerName="copy"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.144470 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557850-8m6zd"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.147809 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.147862 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.148028 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.151446 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557850-8m6zd"]
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.200820 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwqg\" (UniqueName: \"kubernetes.io/projected/e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61-kube-api-access-czwqg\") pod \"auto-csr-approver-29557850-8m6zd\" (UID: \"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61\") " pod="openshift-infra/auto-csr-approver-29557850-8m6zd"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.301568 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czwqg\" (UniqueName: \"kubernetes.io/projected/e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61-kube-api-access-czwqg\") pod \"auto-csr-approver-29557850-8m6zd\" (UID: \"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61\") " pod="openshift-infra/auto-csr-approver-29557850-8m6zd"
Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.336778 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czwqg\" (UniqueName: \"kubernetes.io/projected/e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61-kube-api-access-czwqg\") pod \"auto-csr-approver-29557850-8m6zd\" (UID: \"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61\") "
pod="openshift-infra/auto-csr-approver-29557850-8m6zd" Mar 14 06:50:00 crc kubenswrapper[4817]: I0314 06:50:00.478809 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557850-8m6zd" Mar 14 06:50:01 crc kubenswrapper[4817]: I0314 06:50:01.391474 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557850-8m6zd"] Mar 14 06:50:01 crc kubenswrapper[4817]: I0314 06:50:01.919592 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557850-8m6zd" event={"ID":"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61","Type":"ContainerStarted","Data":"06f32213612ace28bc4337387e39b4d62c5c3f9818acce7e3fa87f808b88524d"} Mar 14 06:50:02 crc kubenswrapper[4817]: I0314 06:50:02.933163 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557850-8m6zd" event={"ID":"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61","Type":"ContainerStarted","Data":"25c685a01e92c8182e3972a316f563c9a4c0087058e6360967cb65b4235515e4"} Mar 14 06:50:02 crc kubenswrapper[4817]: I0314 06:50:02.951536 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557850-8m6zd" podStartSLOduration=1.6974592689999999 podStartE2EDuration="2.951513801s" podCreationTimestamp="2026-03-14 06:50:00 +0000 UTC" firstStartedPulling="2026-03-14 06:50:01.4015434 +0000 UTC m=+4655.439804146" lastFinishedPulling="2026-03-14 06:50:02.655597932 +0000 UTC m=+4656.693858678" observedRunningTime="2026-03-14 06:50:02.951175222 +0000 UTC m=+4656.989435988" watchObservedRunningTime="2026-03-14 06:50:02.951513801 +0000 UTC m=+4656.989774547" Mar 14 06:50:03 crc kubenswrapper[4817]: I0314 06:50:03.732228 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:50:03 crc kubenswrapper[4817]: E0314 06:50:03.732511 4817 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:50:03 crc kubenswrapper[4817]: I0314 06:50:03.942376 4817 generic.go:334] "Generic (PLEG): container finished" podID="e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61" containerID="25c685a01e92c8182e3972a316f563c9a4c0087058e6360967cb65b4235515e4" exitCode=0 Mar 14 06:50:03 crc kubenswrapper[4817]: I0314 06:50:03.942441 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557850-8m6zd" event={"ID":"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61","Type":"ContainerDied","Data":"25c685a01e92c8182e3972a316f563c9a4c0087058e6360967cb65b4235515e4"} Mar 14 06:50:05 crc kubenswrapper[4817]: I0314 06:50:05.330708 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557850-8m6zd" Mar 14 06:50:05 crc kubenswrapper[4817]: I0314 06:50:05.507132 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czwqg\" (UniqueName: \"kubernetes.io/projected/e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61-kube-api-access-czwqg\") pod \"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61\" (UID: \"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61\") " Mar 14 06:50:05 crc kubenswrapper[4817]: I0314 06:50:05.513048 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61-kube-api-access-czwqg" (OuterVolumeSpecName: "kube-api-access-czwqg") pod "e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61" (UID: "e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61"). InnerVolumeSpecName "kube-api-access-czwqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:50:05 crc kubenswrapper[4817]: I0314 06:50:05.610987 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czwqg\" (UniqueName: \"kubernetes.io/projected/e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61-kube-api-access-czwqg\") on node \"crc\" DevicePath \"\"" Mar 14 06:50:05 crc kubenswrapper[4817]: I0314 06:50:05.962878 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557850-8m6zd" event={"ID":"e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61","Type":"ContainerDied","Data":"06f32213612ace28bc4337387e39b4d62c5c3f9818acce7e3fa87f808b88524d"} Mar 14 06:50:05 crc kubenswrapper[4817]: I0314 06:50:05.962942 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557850-8m6zd" Mar 14 06:50:05 crc kubenswrapper[4817]: I0314 06:50:05.962945 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06f32213612ace28bc4337387e39b4d62c5c3f9818acce7e3fa87f808b88524d" Mar 14 06:50:06 crc kubenswrapper[4817]: I0314 06:50:06.041504 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557844-d9mbn"] Mar 14 06:50:06 crc kubenswrapper[4817]: I0314 06:50:06.049042 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557844-d9mbn"] Mar 14 06:50:06 crc kubenswrapper[4817]: I0314 06:50:06.743628 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78075c1-f4bc-428d-95f2-d5fd7c12cd42" path="/var/lib/kubelet/pods/f78075c1-f4bc-428d-95f2-d5fd7c12cd42/volumes" Mar 14 06:50:16 crc kubenswrapper[4817]: I0314 06:50:16.745490 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:50:16 crc kubenswrapper[4817]: E0314 06:50:16.746880 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:50:31 crc kubenswrapper[4817]: I0314 06:50:31.733286 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:50:31 crc kubenswrapper[4817]: E0314 06:50:31.734222 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:50:45 crc kubenswrapper[4817]: I0314 06:50:45.732210 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:50:45 crc kubenswrapper[4817]: E0314 06:50:45.733191 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:50:48 crc kubenswrapper[4817]: I0314 06:50:48.221634 4817 scope.go:117] "RemoveContainer" containerID="8222c00c0a2df81eaeeffa86d3615c5f4d5713d0bcb6e5ea9fc72f5a8cc2ef81" Mar 14 06:50:48 crc kubenswrapper[4817]: I0314 06:50:48.251524 4817 scope.go:117] "RemoveContainer" 
containerID="2f31dc023ad7b19953abda0c921b134cc90bbc4786f0c7125a260fae0f344696" Mar 14 06:51:00 crc kubenswrapper[4817]: I0314 06:51:00.732629 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:51:00 crc kubenswrapper[4817]: E0314 06:51:00.733437 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:51:12 crc kubenswrapper[4817]: I0314 06:51:12.733246 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:51:12 crc kubenswrapper[4817]: E0314 06:51:12.733941 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:51:25 crc kubenswrapper[4817]: I0314 06:51:25.731816 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:51:25 crc kubenswrapper[4817]: E0314 06:51:25.732644 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:51:40 crc kubenswrapper[4817]: I0314 06:51:40.732649 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:51:40 crc kubenswrapper[4817]: E0314 06:51:40.733721 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:51:52 crc kubenswrapper[4817]: I0314 06:51:52.732575 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:51:52 crc kubenswrapper[4817]: E0314 06:51:52.733401 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.171193 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557852-tvz6j"] Mar 14 06:52:00 crc kubenswrapper[4817]: E0314 06:52:00.172814 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61" containerName="oc" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.172952 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61" containerName="oc" 
Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.173414 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61" containerName="oc" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.175400 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557852-tvz6j" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.178014 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.178743 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.179306 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.186770 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557852-tvz6j"] Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.248879 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgf27\" (UniqueName: \"kubernetes.io/projected/2f66bec4-4a89-41af-89c2-0cc4c132cd5c-kube-api-access-bgf27\") pod \"auto-csr-approver-29557852-tvz6j\" (UID: \"2f66bec4-4a89-41af-89c2-0cc4c132cd5c\") " pod="openshift-infra/auto-csr-approver-29557852-tvz6j" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.349781 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgf27\" (UniqueName: \"kubernetes.io/projected/2f66bec4-4a89-41af-89c2-0cc4c132cd5c-kube-api-access-bgf27\") pod \"auto-csr-approver-29557852-tvz6j\" (UID: \"2f66bec4-4a89-41af-89c2-0cc4c132cd5c\") " pod="openshift-infra/auto-csr-approver-29557852-tvz6j" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 
06:52:00.454767 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgf27\" (UniqueName: \"kubernetes.io/projected/2f66bec4-4a89-41af-89c2-0cc4c132cd5c-kube-api-access-bgf27\") pod \"auto-csr-approver-29557852-tvz6j\" (UID: \"2f66bec4-4a89-41af-89c2-0cc4c132cd5c\") " pod="openshift-infra/auto-csr-approver-29557852-tvz6j" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.509480 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557852-tvz6j" Mar 14 06:52:00 crc kubenswrapper[4817]: I0314 06:52:00.966009 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557852-tvz6j"] Mar 14 06:52:01 crc kubenswrapper[4817]: I0314 06:52:01.098092 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557852-tvz6j" event={"ID":"2f66bec4-4a89-41af-89c2-0cc4c132cd5c","Type":"ContainerStarted","Data":"b7fd7145fe0270dd0cdf611573092a8302b13e416c4dd81088ec2ea628e70bf1"} Mar 14 06:52:03 crc kubenswrapper[4817]: I0314 06:52:03.122135 4817 generic.go:334] "Generic (PLEG): container finished" podID="2f66bec4-4a89-41af-89c2-0cc4c132cd5c" containerID="490e9d82af386bf868da4cec87b3cf2eae83481c35ebac17ea6aee1de55bc79d" exitCode=0 Mar 14 06:52:03 crc kubenswrapper[4817]: I0314 06:52:03.122239 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557852-tvz6j" event={"ID":"2f66bec4-4a89-41af-89c2-0cc4c132cd5c","Type":"ContainerDied","Data":"490e9d82af386bf868da4cec87b3cf2eae83481c35ebac17ea6aee1de55bc79d"} Mar 14 06:52:03 crc kubenswrapper[4817]: I0314 06:52:03.732493 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:52:03 crc kubenswrapper[4817]: E0314 06:52:03.732989 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:52:04 crc kubenswrapper[4817]: I0314 06:52:04.525357 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557852-tvz6j" Mar 14 06:52:04 crc kubenswrapper[4817]: I0314 06:52:04.542508 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgf27\" (UniqueName: \"kubernetes.io/projected/2f66bec4-4a89-41af-89c2-0cc4c132cd5c-kube-api-access-bgf27\") pod \"2f66bec4-4a89-41af-89c2-0cc4c132cd5c\" (UID: \"2f66bec4-4a89-41af-89c2-0cc4c132cd5c\") " Mar 14 06:52:04 crc kubenswrapper[4817]: I0314 06:52:04.548450 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f66bec4-4a89-41af-89c2-0cc4c132cd5c-kube-api-access-bgf27" (OuterVolumeSpecName: "kube-api-access-bgf27") pod "2f66bec4-4a89-41af-89c2-0cc4c132cd5c" (UID: "2f66bec4-4a89-41af-89c2-0cc4c132cd5c"). InnerVolumeSpecName "kube-api-access-bgf27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:52:04 crc kubenswrapper[4817]: I0314 06:52:04.645705 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgf27\" (UniqueName: \"kubernetes.io/projected/2f66bec4-4a89-41af-89c2-0cc4c132cd5c-kube-api-access-bgf27\") on node \"crc\" DevicePath \"\"" Mar 14 06:52:05 crc kubenswrapper[4817]: I0314 06:52:05.142591 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557852-tvz6j" event={"ID":"2f66bec4-4a89-41af-89c2-0cc4c132cd5c","Type":"ContainerDied","Data":"b7fd7145fe0270dd0cdf611573092a8302b13e416c4dd81088ec2ea628e70bf1"} Mar 14 06:52:05 crc kubenswrapper[4817]: I0314 06:52:05.142640 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7fd7145fe0270dd0cdf611573092a8302b13e416c4dd81088ec2ea628e70bf1" Mar 14 06:52:05 crc kubenswrapper[4817]: I0314 06:52:05.142749 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557852-tvz6j" Mar 14 06:52:05 crc kubenswrapper[4817]: I0314 06:52:05.603977 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557846-ppx5s"] Mar 14 06:52:05 crc kubenswrapper[4817]: I0314 06:52:05.612724 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557846-ppx5s"] Mar 14 06:52:06 crc kubenswrapper[4817]: I0314 06:52:06.744993 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6bb502-38ee-47fe-9035-ae305cbb5a5d" path="/var/lib/kubelet/pods/8d6bb502-38ee-47fe-9035-ae305cbb5a5d/volumes" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.213072 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6wn27"] Mar 14 06:52:11 crc kubenswrapper[4817]: E0314 06:52:11.215680 4817 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f66bec4-4a89-41af-89c2-0cc4c132cd5c" containerName="oc" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.215720 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f66bec4-4a89-41af-89c2-0cc4c132cd5c" containerName="oc" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.216121 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f66bec4-4a89-41af-89c2-0cc4c132cd5c" containerName="oc" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.219123 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.227466 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wn27"] Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.303403 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-catalog-content\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.303499 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-utilities\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.303732 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jzrl\" (UniqueName: \"kubernetes.io/projected/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-kube-api-access-5jzrl\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " 
pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.405322 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jzrl\" (UniqueName: \"kubernetes.io/projected/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-kube-api-access-5jzrl\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.405438 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-catalog-content\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.405474 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-utilities\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.406023 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-utilities\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.406113 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-catalog-content\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " pod="openshift-marketplace/redhat-marketplace-6wn27" 
Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.428000 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jzrl\" (UniqueName: \"kubernetes.io/projected/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-kube-api-access-5jzrl\") pod \"redhat-marketplace-6wn27\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.542545 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:11 crc kubenswrapper[4817]: I0314 06:52:11.988218 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wn27"] Mar 14 06:52:13 crc kubenswrapper[4817]: I0314 06:52:13.236637 4817 generic.go:334] "Generic (PLEG): container finished" podID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerID="2324bfb7856fd50c5acc9e0c93056beec61801bffcd400cd7fde31a5f1b45bac" exitCode=0 Mar 14 06:52:13 crc kubenswrapper[4817]: I0314 06:52:13.236757 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wn27" event={"ID":"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669","Type":"ContainerDied","Data":"2324bfb7856fd50c5acc9e0c93056beec61801bffcd400cd7fde31a5f1b45bac"} Mar 14 06:52:13 crc kubenswrapper[4817]: I0314 06:52:13.238141 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wn27" event={"ID":"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669","Type":"ContainerStarted","Data":"d89184d77bd8e66fa11c767d6f0513ae82554619141a2fb5b8e5f74414f7f7f0"} Mar 14 06:52:15 crc kubenswrapper[4817]: I0314 06:52:15.257406 4817 generic.go:334] "Generic (PLEG): container finished" podID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerID="a716696b59334481dacbe0d6baeca524f34f2f1cac7d2435e21ac34699992ca4" exitCode=0 Mar 14 06:52:15 crc kubenswrapper[4817]: I0314 06:52:15.257532 4817 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wn27" event={"ID":"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669","Type":"ContainerDied","Data":"a716696b59334481dacbe0d6baeca524f34f2f1cac7d2435e21ac34699992ca4"} Mar 14 06:52:16 crc kubenswrapper[4817]: I0314 06:52:16.268920 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wn27" event={"ID":"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669","Type":"ContainerStarted","Data":"1a0bca4f689e03988077716152cf20cf18a89277715c6dce70ea5103ff928561"} Mar 14 06:52:16 crc kubenswrapper[4817]: I0314 06:52:16.326971 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6wn27" podStartSLOduration=2.68861351 podStartE2EDuration="5.326935373s" podCreationTimestamp="2026-03-14 06:52:11 +0000 UTC" firstStartedPulling="2026-03-14 06:52:13.240436783 +0000 UTC m=+4787.278697569" lastFinishedPulling="2026-03-14 06:52:15.878758646 +0000 UTC m=+4789.917019432" observedRunningTime="2026-03-14 06:52:16.309547707 +0000 UTC m=+4790.347808473" watchObservedRunningTime="2026-03-14 06:52:16.326935373 +0000 UTC m=+4790.365196159" Mar 14 06:52:16 crc kubenswrapper[4817]: I0314 06:52:16.738507 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:52:17 crc kubenswrapper[4817]: I0314 06:52:17.287744 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"a981ac978b9b864b85dc9598c79af70065a145631ca5c16d368284a9ee5e7be8"} Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.543296 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.544192 4817 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.598605 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.847799 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7rgld/must-gather-f8j2n"] Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.850234 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/must-gather-f8j2n" Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.857583 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7rgld"/"kube-root-ca.crt" Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.857912 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7rgld"/"openshift-service-ca.crt" Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.858113 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7rgld"/"default-dockercfg-9974b" Mar 14 06:52:21 crc kubenswrapper[4817]: I0314 06:52:21.871040 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7rgld/must-gather-f8j2n"] Mar 14 06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.047288 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/be945528-09ce-4135-b245-447831cdc494-must-gather-output\") pod \"must-gather-f8j2n\" (UID: \"be945528-09ce-4135-b245-447831cdc494\") " pod="openshift-must-gather-7rgld/must-gather-f8j2n" Mar 14 06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.047352 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmc8w\" 
(UniqueName: \"kubernetes.io/projected/be945528-09ce-4135-b245-447831cdc494-kube-api-access-tmc8w\") pod \"must-gather-f8j2n\" (UID: \"be945528-09ce-4135-b245-447831cdc494\") " pod="openshift-must-gather-7rgld/must-gather-f8j2n" Mar 14 06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.148829 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/be945528-09ce-4135-b245-447831cdc494-must-gather-output\") pod \"must-gather-f8j2n\" (UID: \"be945528-09ce-4135-b245-447831cdc494\") " pod="openshift-must-gather-7rgld/must-gather-f8j2n" Mar 14 06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.149246 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmc8w\" (UniqueName: \"kubernetes.io/projected/be945528-09ce-4135-b245-447831cdc494-kube-api-access-tmc8w\") pod \"must-gather-f8j2n\" (UID: \"be945528-09ce-4135-b245-447831cdc494\") " pod="openshift-must-gather-7rgld/must-gather-f8j2n" Mar 14 06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.149561 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/be945528-09ce-4135-b245-447831cdc494-must-gather-output\") pod \"must-gather-f8j2n\" (UID: \"be945528-09ce-4135-b245-447831cdc494\") " pod="openshift-must-gather-7rgld/must-gather-f8j2n" Mar 14 06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.648884 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmc8w\" (UniqueName: \"kubernetes.io/projected/be945528-09ce-4135-b245-447831cdc494-kube-api-access-tmc8w\") pod \"must-gather-f8j2n\" (UID: \"be945528-09ce-4135-b245-447831cdc494\") " pod="openshift-must-gather-7rgld/must-gather-f8j2n" Mar 14 06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.689791 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 
06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.742225 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wn27"] Mar 14 06:52:22 crc kubenswrapper[4817]: I0314 06:52:22.787060 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/must-gather-f8j2n" Mar 14 06:52:23 crc kubenswrapper[4817]: I0314 06:52:23.098672 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7rgld/must-gather-f8j2n"] Mar 14 06:52:23 crc kubenswrapper[4817]: W0314 06:52:23.099054 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe945528_09ce_4135_b245_447831cdc494.slice/crio-01872885d3f849bb8faeb3549d7c7a26f97313bd907545ed99dbd6d3c0c225e0 WatchSource:0}: Error finding container 01872885d3f849bb8faeb3549d7c7a26f97313bd907545ed99dbd6d3c0c225e0: Status 404 returned error can't find the container with id 01872885d3f849bb8faeb3549d7c7a26f97313bd907545ed99dbd6d3c0c225e0 Mar 14 06:52:23 crc kubenswrapper[4817]: I0314 06:52:23.343374 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/must-gather-f8j2n" event={"ID":"be945528-09ce-4135-b245-447831cdc494","Type":"ContainerStarted","Data":"b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e"} Mar 14 06:52:23 crc kubenswrapper[4817]: I0314 06:52:23.343422 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/must-gather-f8j2n" event={"ID":"be945528-09ce-4135-b245-447831cdc494","Type":"ContainerStarted","Data":"01872885d3f849bb8faeb3549d7c7a26f97313bd907545ed99dbd6d3c0c225e0"} Mar 14 06:52:24 crc kubenswrapper[4817]: I0314 06:52:24.354941 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6wn27" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerName="registry-server" 
containerID="cri-o://1a0bca4f689e03988077716152cf20cf18a89277715c6dce70ea5103ff928561" gracePeriod=2 Mar 14 06:52:24 crc kubenswrapper[4817]: I0314 06:52:24.356522 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/must-gather-f8j2n" event={"ID":"be945528-09ce-4135-b245-447831cdc494","Type":"ContainerStarted","Data":"12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b"} Mar 14 06:52:24 crc kubenswrapper[4817]: I0314 06:52:24.381300 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7rgld/must-gather-f8j2n" podStartSLOduration=3.381280562 podStartE2EDuration="3.381280562s" podCreationTimestamp="2026-03-14 06:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:52:24.374153029 +0000 UTC m=+4798.412413775" watchObservedRunningTime="2026-03-14 06:52:24.381280562 +0000 UTC m=+4798.419541298" Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.371091 4817 generic.go:334] "Generic (PLEG): container finished" podID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerID="1a0bca4f689e03988077716152cf20cf18a89277715c6dce70ea5103ff928561" exitCode=0 Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.371627 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wn27" event={"ID":"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669","Type":"ContainerDied","Data":"1a0bca4f689e03988077716152cf20cf18a89277715c6dce70ea5103ff928561"} Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.545474 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.724527 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-utilities\") pod \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.724614 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-catalog-content\") pod \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.724768 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jzrl\" (UniqueName: \"kubernetes.io/projected/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-kube-api-access-5jzrl\") pod \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\" (UID: \"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669\") " Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.725814 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-utilities" (OuterVolumeSpecName: "utilities") pod "e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" (UID: "e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.731062 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-kube-api-access-5jzrl" (OuterVolumeSpecName: "kube-api-access-5jzrl") pod "e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" (UID: "e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669"). InnerVolumeSpecName "kube-api-access-5jzrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.757010 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" (UID: "e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.826874 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jzrl\" (UniqueName: \"kubernetes.io/projected/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-kube-api-access-5jzrl\") on node \"crc\" DevicePath \"\"" Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.826923 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:52:25 crc kubenswrapper[4817]: I0314 06:52:25.826933 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:52:26 crc kubenswrapper[4817]: I0314 06:52:26.381366 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wn27" event={"ID":"e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669","Type":"ContainerDied","Data":"d89184d77bd8e66fa11c767d6f0513ae82554619141a2fb5b8e5f74414f7f7f0"} Mar 14 06:52:26 crc kubenswrapper[4817]: I0314 06:52:26.381724 4817 scope.go:117] "RemoveContainer" containerID="1a0bca4f689e03988077716152cf20cf18a89277715c6dce70ea5103ff928561" Mar 14 06:52:26 crc kubenswrapper[4817]: I0314 06:52:26.381420 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wn27" Mar 14 06:52:26 crc kubenswrapper[4817]: I0314 06:52:26.413419 4817 scope.go:117] "RemoveContainer" containerID="a716696b59334481dacbe0d6baeca524f34f2f1cac7d2435e21ac34699992ca4" Mar 14 06:52:26 crc kubenswrapper[4817]: I0314 06:52:26.426056 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wn27"] Mar 14 06:52:26 crc kubenswrapper[4817]: I0314 06:52:26.435453 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wn27"] Mar 14 06:52:26 crc kubenswrapper[4817]: I0314 06:52:26.442105 4817 scope.go:117] "RemoveContainer" containerID="2324bfb7856fd50c5acc9e0c93056beec61801bffcd400cd7fde31a5f1b45bac" Mar 14 06:52:26 crc kubenswrapper[4817]: I0314 06:52:26.748803 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" path="/var/lib/kubelet/pods/e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669/volumes" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.415578 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7rgld/crc-debug-cx55b"] Mar 14 06:52:27 crc kubenswrapper[4817]: E0314 06:52:27.416421 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerName="registry-server" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.416437 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerName="registry-server" Mar 14 06:52:27 crc kubenswrapper[4817]: E0314 06:52:27.416454 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerName="extract-utilities" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.416461 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerName="extract-utilities" Mar 
14 06:52:27 crc kubenswrapper[4817]: E0314 06:52:27.416486 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerName="extract-content" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.416495 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerName="extract-content" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.416999 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ff5dd9-76a5-4b2c-a5f7-e51923d7c669" containerName="registry-server" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.417842 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.557751 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a12fb4d-4540-4638-bc81-d9970e8fda99-host\") pod \"crc-debug-cx55b\" (UID: \"2a12fb4d-4540-4638-bc81-d9970e8fda99\") " pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.557840 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv9sj\" (UniqueName: \"kubernetes.io/projected/2a12fb4d-4540-4638-bc81-d9970e8fda99-kube-api-access-mv9sj\") pod \"crc-debug-cx55b\" (UID: \"2a12fb4d-4540-4638-bc81-d9970e8fda99\") " pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.659173 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a12fb4d-4540-4638-bc81-d9970e8fda99-host\") pod \"crc-debug-cx55b\" (UID: \"2a12fb4d-4540-4638-bc81-d9970e8fda99\") " pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 
06:52:27.659761 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv9sj\" (UniqueName: \"kubernetes.io/projected/2a12fb4d-4540-4638-bc81-d9970e8fda99-kube-api-access-mv9sj\") pod \"crc-debug-cx55b\" (UID: \"2a12fb4d-4540-4638-bc81-d9970e8fda99\") " pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.659684 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a12fb4d-4540-4638-bc81-d9970e8fda99-host\") pod \"crc-debug-cx55b\" (UID: \"2a12fb4d-4540-4638-bc81-d9970e8fda99\") " pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.687994 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv9sj\" (UniqueName: \"kubernetes.io/projected/2a12fb4d-4540-4638-bc81-d9970e8fda99-kube-api-access-mv9sj\") pod \"crc-debug-cx55b\" (UID: \"2a12fb4d-4540-4638-bc81-d9970e8fda99\") " pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:52:27 crc kubenswrapper[4817]: I0314 06:52:27.741500 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:52:27 crc kubenswrapper[4817]: W0314 06:52:27.775025 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a12fb4d_4540_4638_bc81_d9970e8fda99.slice/crio-8fc3899dffa5a2558cad72e03b5e40d4a828adc6d97127a41590cd28dc126512 WatchSource:0}: Error finding container 8fc3899dffa5a2558cad72e03b5e40d4a828adc6d97127a41590cd28dc126512: Status 404 returned error can't find the container with id 8fc3899dffa5a2558cad72e03b5e40d4a828adc6d97127a41590cd28dc126512 Mar 14 06:52:28 crc kubenswrapper[4817]: I0314 06:52:28.399812 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/crc-debug-cx55b" event={"ID":"2a12fb4d-4540-4638-bc81-d9970e8fda99","Type":"ContainerStarted","Data":"330a1c9ed8afee04376b883c1cf14a71dd40044a72a93e0bd1cc0095c2a3f84d"} Mar 14 06:52:28 crc kubenswrapper[4817]: I0314 06:52:28.400290 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/crc-debug-cx55b" event={"ID":"2a12fb4d-4540-4638-bc81-d9970e8fda99","Type":"ContainerStarted","Data":"8fc3899dffa5a2558cad72e03b5e40d4a828adc6d97127a41590cd28dc126512"} Mar 14 06:52:28 crc kubenswrapper[4817]: I0314 06:52:28.422979 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7rgld/crc-debug-cx55b" podStartSLOduration=1.42295572 podStartE2EDuration="1.42295572s" podCreationTimestamp="2026-03-14 06:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 06:52:28.41452269 +0000 UTC m=+4802.452783436" watchObservedRunningTime="2026-03-14 06:52:28.42295572 +0000 UTC m=+4802.461216466" Mar 14 06:52:48 crc kubenswrapper[4817]: I0314 06:52:48.380677 4817 scope.go:117] "RemoveContainer" 
containerID="fadf4bab782b639e9e0c55d7f9e65a374bb1a58b80b054d53b7970408d2a9e26" Mar 14 06:53:06 crc kubenswrapper[4817]: I0314 06:53:06.742367 4817 generic.go:334] "Generic (PLEG): container finished" podID="2a12fb4d-4540-4638-bc81-d9970e8fda99" containerID="330a1c9ed8afee04376b883c1cf14a71dd40044a72a93e0bd1cc0095c2a3f84d" exitCode=0 Mar 14 06:53:06 crc kubenswrapper[4817]: I0314 06:53:06.747163 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/crc-debug-cx55b" event={"ID":"2a12fb4d-4540-4638-bc81-d9970e8fda99","Type":"ContainerDied","Data":"330a1c9ed8afee04376b883c1cf14a71dd40044a72a93e0bd1cc0095c2a3f84d"} Mar 14 06:53:07 crc kubenswrapper[4817]: I0314 06:53:07.865568 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:53:07 crc kubenswrapper[4817]: I0314 06:53:07.900556 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7rgld/crc-debug-cx55b"] Mar 14 06:53:07 crc kubenswrapper[4817]: I0314 06:53:07.908524 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7rgld/crc-debug-cx55b"] Mar 14 06:53:07 crc kubenswrapper[4817]: I0314 06:53:07.970288 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv9sj\" (UniqueName: \"kubernetes.io/projected/2a12fb4d-4540-4638-bc81-d9970e8fda99-kube-api-access-mv9sj\") pod \"2a12fb4d-4540-4638-bc81-d9970e8fda99\" (UID: \"2a12fb4d-4540-4638-bc81-d9970e8fda99\") " Mar 14 06:53:07 crc kubenswrapper[4817]: I0314 06:53:07.970444 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a12fb4d-4540-4638-bc81-d9970e8fda99-host\") pod \"2a12fb4d-4540-4638-bc81-d9970e8fda99\" (UID: \"2a12fb4d-4540-4638-bc81-d9970e8fda99\") " Mar 14 06:53:07 crc kubenswrapper[4817]: I0314 06:53:07.970560 4817 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a12fb4d-4540-4638-bc81-d9970e8fda99-host" (OuterVolumeSpecName: "host") pod "2a12fb4d-4540-4638-bc81-d9970e8fda99" (UID: "2a12fb4d-4540-4638-bc81-d9970e8fda99"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 06:53:07 crc kubenswrapper[4817]: I0314 06:53:07.970995 4817 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a12fb4d-4540-4638-bc81-d9970e8fda99-host\") on node \"crc\" DevicePath \"\"" Mar 14 06:53:07 crc kubenswrapper[4817]: I0314 06:53:07.985957 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a12fb4d-4540-4638-bc81-d9970e8fda99-kube-api-access-mv9sj" (OuterVolumeSpecName: "kube-api-access-mv9sj") pod "2a12fb4d-4540-4638-bc81-d9970e8fda99" (UID: "2a12fb4d-4540-4638-bc81-d9970e8fda99"). InnerVolumeSpecName "kube-api-access-mv9sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:53:08 crc kubenswrapper[4817]: I0314 06:53:08.072796 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv9sj\" (UniqueName: \"kubernetes.io/projected/2a12fb4d-4540-4638-bc81-d9970e8fda99-kube-api-access-mv9sj\") on node \"crc\" DevicePath \"\"" Mar 14 06:53:08 crc kubenswrapper[4817]: I0314 06:53:08.744797 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a12fb4d-4540-4638-bc81-d9970e8fda99" path="/var/lib/kubelet/pods/2a12fb4d-4540-4638-bc81-d9970e8fda99/volumes" Mar 14 06:53:08 crc kubenswrapper[4817]: I0314 06:53:08.763955 4817 scope.go:117] "RemoveContainer" containerID="330a1c9ed8afee04376b883c1cf14a71dd40044a72a93e0bd1cc0095c2a3f84d" Mar 14 06:53:08 crc kubenswrapper[4817]: I0314 06:53:08.763994 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-cx55b" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.160551 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7rgld/crc-debug-kzd75"] Mar 14 06:53:09 crc kubenswrapper[4817]: E0314 06:53:09.161680 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a12fb4d-4540-4638-bc81-d9970e8fda99" containerName="container-00" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.161700 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a12fb4d-4540-4638-bc81-d9970e8fda99" containerName="container-00" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.162223 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a12fb4d-4540-4638-bc81-d9970e8fda99" containerName="container-00" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.163251 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.196076 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-host\") pod \"crc-debug-kzd75\" (UID: \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\") " pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.196200 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtlhw\" (UniqueName: \"kubernetes.io/projected/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-kube-api-access-mtlhw\") pod \"crc-debug-kzd75\" (UID: \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\") " pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.298268 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtlhw\" (UniqueName: 
\"kubernetes.io/projected/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-kube-api-access-mtlhw\") pod \"crc-debug-kzd75\" (UID: \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\") " pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.298464 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-host\") pod \"crc-debug-kzd75\" (UID: \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\") " pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.298662 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-host\") pod \"crc-debug-kzd75\" (UID: \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\") " pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.323973 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtlhw\" (UniqueName: \"kubernetes.io/projected/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-kube-api-access-mtlhw\") pod \"crc-debug-kzd75\" (UID: \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\") " pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.494986 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:09 crc kubenswrapper[4817]: I0314 06:53:09.776334 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/crc-debug-kzd75" event={"ID":"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce","Type":"ContainerStarted","Data":"afa6f71ce235a2e407b05b3b2d276f4522a50bd660bf4c2652a4e432e4bca31b"} Mar 14 06:53:10 crc kubenswrapper[4817]: I0314 06:53:10.791049 4817 generic.go:334] "Generic (PLEG): container finished" podID="6d19b7ce-be04-4575-bc38-8f6ff5fb55ce" containerID="ce985d107a4b187fde3f247a1c9847feb2640cc1c585e6c65d7f56c122e62629" exitCode=0 Mar 14 06:53:10 crc kubenswrapper[4817]: I0314 06:53:10.791161 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/crc-debug-kzd75" event={"ID":"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce","Type":"ContainerDied","Data":"ce985d107a4b187fde3f247a1c9847feb2640cc1c585e6c65d7f56c122e62629"} Mar 14 06:53:11 crc kubenswrapper[4817]: I0314 06:53:11.899792 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:11 crc kubenswrapper[4817]: I0314 06:53:11.945724 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-host\") pod \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\" (UID: \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\") " Mar 14 06:53:11 crc kubenswrapper[4817]: I0314 06:53:11.945812 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtlhw\" (UniqueName: \"kubernetes.io/projected/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-kube-api-access-mtlhw\") pod \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\" (UID: \"6d19b7ce-be04-4575-bc38-8f6ff5fb55ce\") " Mar 14 06:53:11 crc kubenswrapper[4817]: I0314 06:53:11.946292 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-host" (OuterVolumeSpecName: "host") pod "6d19b7ce-be04-4575-bc38-8f6ff5fb55ce" (UID: "6d19b7ce-be04-4575-bc38-8f6ff5fb55ce"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 06:53:11 crc kubenswrapper[4817]: I0314 06:53:11.946622 4817 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-host\") on node \"crc\" DevicePath \"\"" Mar 14 06:53:11 crc kubenswrapper[4817]: I0314 06:53:11.956032 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-kube-api-access-mtlhw" (OuterVolumeSpecName: "kube-api-access-mtlhw") pod "6d19b7ce-be04-4575-bc38-8f6ff5fb55ce" (UID: "6d19b7ce-be04-4575-bc38-8f6ff5fb55ce"). InnerVolumeSpecName "kube-api-access-mtlhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:53:12 crc kubenswrapper[4817]: I0314 06:53:12.048084 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtlhw\" (UniqueName: \"kubernetes.io/projected/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce-kube-api-access-mtlhw\") on node \"crc\" DevicePath \"\"" Mar 14 06:53:12 crc kubenswrapper[4817]: I0314 06:53:12.355187 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7rgld/crc-debug-kzd75"] Mar 14 06:53:12 crc kubenswrapper[4817]: I0314 06:53:12.368176 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7rgld/crc-debug-kzd75"] Mar 14 06:53:12 crc kubenswrapper[4817]: I0314 06:53:12.775915 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d19b7ce-be04-4575-bc38-8f6ff5fb55ce" path="/var/lib/kubelet/pods/6d19b7ce-be04-4575-bc38-8f6ff5fb55ce/volumes" Mar 14 06:53:12 crc kubenswrapper[4817]: I0314 06:53:12.810203 4817 scope.go:117] "RemoveContainer" containerID="ce985d107a4b187fde3f247a1c9847feb2640cc1c585e6c65d7f56c122e62629" Mar 14 06:53:12 crc kubenswrapper[4817]: I0314 06:53:12.810321 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-kzd75" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.737515 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7rgld/crc-debug-xfftt"] Mar 14 06:53:13 crc kubenswrapper[4817]: E0314 06:53:13.738004 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d19b7ce-be04-4575-bc38-8f6ff5fb55ce" containerName="container-00" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.738022 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d19b7ce-be04-4575-bc38-8f6ff5fb55ce" containerName="container-00" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.738248 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d19b7ce-be04-4575-bc38-8f6ff5fb55ce" containerName="container-00" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.738971 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-xfftt" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.796468 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfffb137-f5b2-44c1-84af-2ec2582763c0-host\") pod \"crc-debug-xfftt\" (UID: \"bfffb137-f5b2-44c1-84af-2ec2582763c0\") " pod="openshift-must-gather-7rgld/crc-debug-xfftt" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.796722 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q68r2\" (UniqueName: \"kubernetes.io/projected/bfffb137-f5b2-44c1-84af-2ec2582763c0-kube-api-access-q68r2\") pod \"crc-debug-xfftt\" (UID: \"bfffb137-f5b2-44c1-84af-2ec2582763c0\") " pod="openshift-must-gather-7rgld/crc-debug-xfftt" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.900268 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bfffb137-f5b2-44c1-84af-2ec2582763c0-host\") pod \"crc-debug-xfftt\" (UID: \"bfffb137-f5b2-44c1-84af-2ec2582763c0\") " pod="openshift-must-gather-7rgld/crc-debug-xfftt" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.900372 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q68r2\" (UniqueName: \"kubernetes.io/projected/bfffb137-f5b2-44c1-84af-2ec2582763c0-kube-api-access-q68r2\") pod \"crc-debug-xfftt\" (UID: \"bfffb137-f5b2-44c1-84af-2ec2582763c0\") " pod="openshift-must-gather-7rgld/crc-debug-xfftt" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.900463 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfffb137-f5b2-44c1-84af-2ec2582763c0-host\") pod \"crc-debug-xfftt\" (UID: \"bfffb137-f5b2-44c1-84af-2ec2582763c0\") " pod="openshift-must-gather-7rgld/crc-debug-xfftt" Mar 14 06:53:13 crc kubenswrapper[4817]: I0314 06:53:13.928735 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q68r2\" (UniqueName: \"kubernetes.io/projected/bfffb137-f5b2-44c1-84af-2ec2582763c0-kube-api-access-q68r2\") pod \"crc-debug-xfftt\" (UID: \"bfffb137-f5b2-44c1-84af-2ec2582763c0\") " pod="openshift-must-gather-7rgld/crc-debug-xfftt" Mar 14 06:53:14 crc kubenswrapper[4817]: I0314 06:53:14.057979 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-xfftt" Mar 14 06:53:14 crc kubenswrapper[4817]: W0314 06:53:14.083278 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfffb137_f5b2_44c1_84af_2ec2582763c0.slice/crio-ec1b3ad5649b6f2eec6ea52fb766b24b0178f8b16e894da3122afd2b0d342fdc WatchSource:0}: Error finding container ec1b3ad5649b6f2eec6ea52fb766b24b0178f8b16e894da3122afd2b0d342fdc: Status 404 returned error can't find the container with id ec1b3ad5649b6f2eec6ea52fb766b24b0178f8b16e894da3122afd2b0d342fdc Mar 14 06:53:14 crc kubenswrapper[4817]: I0314 06:53:14.833374 4817 generic.go:334] "Generic (PLEG): container finished" podID="bfffb137-f5b2-44c1-84af-2ec2582763c0" containerID="dde6be353c2820ced6187138521d4e3152a91f6cb6595573634e7e6b246e50ae" exitCode=0 Mar 14 06:53:14 crc kubenswrapper[4817]: I0314 06:53:14.833554 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/crc-debug-xfftt" event={"ID":"bfffb137-f5b2-44c1-84af-2ec2582763c0","Type":"ContainerDied","Data":"dde6be353c2820ced6187138521d4e3152a91f6cb6595573634e7e6b246e50ae"} Mar 14 06:53:14 crc kubenswrapper[4817]: I0314 06:53:14.833740 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/crc-debug-xfftt" event={"ID":"bfffb137-f5b2-44c1-84af-2ec2582763c0","Type":"ContainerStarted","Data":"ec1b3ad5649b6f2eec6ea52fb766b24b0178f8b16e894da3122afd2b0d342fdc"} Mar 14 06:53:14 crc kubenswrapper[4817]: I0314 06:53:14.882503 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7rgld/crc-debug-xfftt"] Mar 14 06:53:14 crc kubenswrapper[4817]: I0314 06:53:14.896482 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7rgld/crc-debug-xfftt"] Mar 14 06:53:15 crc kubenswrapper[4817]: I0314 06:53:15.964546 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-xfftt"
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.038126 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfffb137-f5b2-44c1-84af-2ec2582763c0-host\") pod \"bfffb137-f5b2-44c1-84af-2ec2582763c0\" (UID: \"bfffb137-f5b2-44c1-84af-2ec2582763c0\") "
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.038297 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q68r2\" (UniqueName: \"kubernetes.io/projected/bfffb137-f5b2-44c1-84af-2ec2582763c0-kube-api-access-q68r2\") pod \"bfffb137-f5b2-44c1-84af-2ec2582763c0\" (UID: \"bfffb137-f5b2-44c1-84af-2ec2582763c0\") "
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.038285 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfffb137-f5b2-44c1-84af-2ec2582763c0-host" (OuterVolumeSpecName: "host") pod "bfffb137-f5b2-44c1-84af-2ec2582763c0" (UID: "bfffb137-f5b2-44c1-84af-2ec2582763c0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.039111 4817 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfffb137-f5b2-44c1-84af-2ec2582763c0-host\") on node \"crc\" DevicePath \"\""
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.047566 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfffb137-f5b2-44c1-84af-2ec2582763c0-kube-api-access-q68r2" (OuterVolumeSpecName: "kube-api-access-q68r2") pod "bfffb137-f5b2-44c1-84af-2ec2582763c0" (UID: "bfffb137-f5b2-44c1-84af-2ec2582763c0"). InnerVolumeSpecName "kube-api-access-q68r2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.141124 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q68r2\" (UniqueName: \"kubernetes.io/projected/bfffb137-f5b2-44c1-84af-2ec2582763c0-kube-api-access-q68r2\") on node \"crc\" DevicePath \"\""
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.770554 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfffb137-f5b2-44c1-84af-2ec2582763c0" path="/var/lib/kubelet/pods/bfffb137-f5b2-44c1-84af-2ec2582763c0/volumes"
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.855991 4817 scope.go:117] "RemoveContainer" containerID="dde6be353c2820ced6187138521d4e3152a91f6cb6595573634e7e6b246e50ae"
Mar 14 06:53:16 crc kubenswrapper[4817]: I0314 06:53:16.856045 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/crc-debug-xfftt"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.150390 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557854-j5qtb"]
Mar 14 06:54:00 crc kubenswrapper[4817]: E0314 06:54:00.151462 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfffb137-f5b2-44c1-84af-2ec2582763c0" containerName="container-00"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.151477 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfffb137-f5b2-44c1-84af-2ec2582763c0" containerName="container-00"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.151707 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfffb137-f5b2-44c1-84af-2ec2582763c0" containerName="container-00"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.152600 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557854-j5qtb"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.158817 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.159183 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.159508 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.191495 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557854-j5qtb"]
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.267369 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-565cfb5466-k8v6z_af06e777-9e2e-437e-a013-cd5e83735ac0/barbican-api/0.log"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.289204 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmbnq\" (UniqueName: \"kubernetes.io/projected/7c63f9c2-1e95-432e-9fb3-1313834da13d-kube-api-access-lmbnq\") pod \"auto-csr-approver-29557854-j5qtb\" (UID: \"7c63f9c2-1e95-432e-9fb3-1313834da13d\") " pod="openshift-infra/auto-csr-approver-29557854-j5qtb"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.345080 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-565cfb5466-k8v6z_af06e777-9e2e-437e-a013-cd5e83735ac0/barbican-api-log/0.log"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.391314 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmbnq\" (UniqueName: \"kubernetes.io/projected/7c63f9c2-1e95-432e-9fb3-1313834da13d-kube-api-access-lmbnq\") pod \"auto-csr-approver-29557854-j5qtb\" (UID: \"7c63f9c2-1e95-432e-9fb3-1313834da13d\") " pod="openshift-infra/auto-csr-approver-29557854-j5qtb"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.415622 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmbnq\" (UniqueName: \"kubernetes.io/projected/7c63f9c2-1e95-432e-9fb3-1313834da13d-kube-api-access-lmbnq\") pod \"auto-csr-approver-29557854-j5qtb\" (UID: \"7c63f9c2-1e95-432e-9fb3-1313834da13d\") " pod="openshift-infra/auto-csr-approver-29557854-j5qtb"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.435191 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b7556b9f8-gtmkg_cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0/barbican-keystone-listener/0.log"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.480113 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557854-j5qtb"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.710857 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-658f9b4fd7-k22b5_a737974d-6611-4a56-9bbb-27256380ae54/barbican-worker/0.log"
Mar 14 06:54:00 crc kubenswrapper[4817]: I0314 06:54:00.757915 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b7556b9f8-gtmkg_cc8e6f7b-b6a9-40e1-b71d-e61e17c72ee0/barbican-keystone-listener-log/0.log"
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:00.809545 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-658f9b4fd7-k22b5_a737974d-6611-4a56-9bbb-27256380ae54/barbican-worker-log/0.log"
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:00.952781 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dvf4c_03575d81-89e3-4d1a-a27a-5aad81319453/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:00.980617 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557854-j5qtb"]
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:00.989264 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:01.061755 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edf54d5a-1b48-43ca-a621-e815ccf42e59/ceilometer-central-agent/0.log"
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:01.155659 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edf54d5a-1b48-43ca-a621-e815ccf42e59/ceilometer-notification-agent/0.log"
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:01.186098 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edf54d5a-1b48-43ca-a621-e815ccf42e59/proxy-httpd/0.log"
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:01.232121 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edf54d5a-1b48-43ca-a621-e815ccf42e59/sg-core/0.log"
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:01.302397 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557854-j5qtb" event={"ID":"7c63f9c2-1e95-432e-9fb3-1313834da13d","Type":"ContainerStarted","Data":"5f52f8bd0f67aafb2722f1fa4b5a1710b420f295112561ee89d0a53b8b175a27"}
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:01.807289 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-knzng_b24eeb13-77b4-4662-90f0-933ae091cfe2/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:01 crc kubenswrapper[4817]: I0314 06:54:01.966477 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skrbn_bfe316ac-01fd-4838-b92a-7899469d769f/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:02 crc kubenswrapper[4817]: I0314 06:54:02.202475 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_937e0c39-f135-482a-b4f8-388fbd9a11bd/cinder-api-log/0.log"
Mar 14 06:54:02 crc kubenswrapper[4817]: I0314 06:54:02.221139 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_937e0c39-f135-482a-b4f8-388fbd9a11bd/cinder-api/0.log"
Mar 14 06:54:02 crc kubenswrapper[4817]: I0314 06:54:02.370065 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557854-j5qtb" event={"ID":"7c63f9c2-1e95-432e-9fb3-1313834da13d","Type":"ContainerStarted","Data":"2f11abb370d8224d777970aa6dc4799a65665ebdcb65168eff74ddf9ddc7ee9b"}
Mar 14 06:54:02 crc kubenswrapper[4817]: I0314 06:54:02.395187 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557854-j5qtb" podStartSLOduration=1.568106227 podStartE2EDuration="2.395165956s" podCreationTimestamp="2026-03-14 06:54:00 +0000 UTC" firstStartedPulling="2026-03-14 06:54:00.989048822 +0000 UTC m=+4895.027309568" lastFinishedPulling="2026-03-14 06:54:01.816108541 +0000 UTC m=+4895.854369297" observedRunningTime="2026-03-14 06:54:02.388861696 +0000 UTC m=+4896.427122442" watchObservedRunningTime="2026-03-14 06:54:02.395165956 +0000 UTC m=+4896.433426702"
Mar 14 06:54:02 crc kubenswrapper[4817]: I0314 06:54:02.734763 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_1b031afc-6d59-484d-8490-f684bbad769f/probe/0.log"
Mar 14 06:54:02 crc kubenswrapper[4817]: I0314 06:54:02.804769 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2a37bd39-17a6-4c93-8146-b694d6e30b37/cinder-scheduler/0.log"
Mar 14 06:54:02 crc kubenswrapper[4817]: I0314 06:54:02.841381 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_1b031afc-6d59-484d-8490-f684bbad769f/cinder-backup/0.log"
Mar 14 06:54:02 crc kubenswrapper[4817]: I0314 06:54:02.952088 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2a37bd39-17a6-4c93-8146-b694d6e30b37/probe/0.log"
Mar 14 06:54:03 crc kubenswrapper[4817]: I0314 06:54:03.035621 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0f16a837-b3ad-4283-bd8e-19512d545253/probe/0.log"
Mar 14 06:54:03 crc kubenswrapper[4817]: I0314 06:54:03.111531 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0f16a837-b3ad-4283-bd8e-19512d545253/cinder-volume/0.log"
Mar 14 06:54:03 crc kubenswrapper[4817]: I0314 06:54:03.243553 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ndntk_7c8f94cd-c90d-40df-af0a-88ddf4730cbc/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:03 crc kubenswrapper[4817]: I0314 06:54:03.382748 4817 generic.go:334] "Generic (PLEG): container finished" podID="7c63f9c2-1e95-432e-9fb3-1313834da13d" containerID="2f11abb370d8224d777970aa6dc4799a65665ebdcb65168eff74ddf9ddc7ee9b" exitCode=0
Mar 14 06:54:03 crc kubenswrapper[4817]: I0314 06:54:03.382801 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557854-j5qtb" event={"ID":"7c63f9c2-1e95-432e-9fb3-1313834da13d","Type":"ContainerDied","Data":"2f11abb370d8224d777970aa6dc4799a65665ebdcb65168eff74ddf9ddc7ee9b"}
Mar 14 06:54:03 crc kubenswrapper[4817]: I0314 06:54:03.405307 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-8ksxw_c6d0efb4-1d62-46b6-8ecd-974f9d0ff31a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:03 crc kubenswrapper[4817]: I0314 06:54:03.907369 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-rc9hl_5405820a-1727-4506-aeef-6c081ec11d88/init/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.070557 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-rc9hl_5405820a-1727-4506-aeef-6c081ec11d88/init/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.125207 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b7b55b75-81af-4e71-8710-7b050784fa23/glance-httpd/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.135963 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-rc9hl_5405820a-1727-4506-aeef-6c081ec11d88/dnsmasq-dns/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.173654 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b7b55b75-81af-4e71-8710-7b050784fa23/glance-log/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.326189 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ebec1c57-27fe-4039-acbc-ddfb06dede94/glance-log/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.331018 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ebec1c57-27fe-4039-acbc-ddfb06dede94/glance-httpd/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.488935 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-767cf48f8d-lxbdx_abfb19f7-bac6-45a5-953e-546d46435171/horizon/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.661358 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-m9mk6_273c6104-8daf-4e5e-b87e-aaf48ee8ae1f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.758745 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557854-j5qtb"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.760213 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-767cf48f8d-lxbdx_abfb19f7-bac6-45a5-953e-546d46435171/horizon-log/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.776860 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pr57t_d6d2380c-071b-413a-a854-7b25ae09401a/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.897503 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmbnq\" (UniqueName: \"kubernetes.io/projected/7c63f9c2-1e95-432e-9fb3-1313834da13d-kube-api-access-lmbnq\") pod \"7c63f9c2-1e95-432e-9fb3-1313834da13d\" (UID: \"7c63f9c2-1e95-432e-9fb3-1313834da13d\") "
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.905928 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c63f9c2-1e95-432e-9fb3-1313834da13d-kube-api-access-lmbnq" (OuterVolumeSpecName: "kube-api-access-lmbnq") pod "7c63f9c2-1e95-432e-9fb3-1313834da13d" (UID: "7c63f9c2-1e95-432e-9fb3-1313834da13d"). InnerVolumeSpecName "kube-api-access-lmbnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:54:04 crc kubenswrapper[4817]: I0314 06:54:04.999667 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmbnq\" (UniqueName: \"kubernetes.io/projected/7c63f9c2-1e95-432e-9fb3-1313834da13d-kube-api-access-lmbnq\") on node \"crc\" DevicePath \"\""
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.033020 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29557801-4vr2l_3f78210b-4544-4dc2-8e6e-a873af162323/keystone-cron/0.log"
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.236081 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_87a2e6ce-41f8-473e-ba22-d038bbef1de2/kube-state-metrics/0.log"
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.364760 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dzgtn_921c813c-e71f-4a7b-b74c-a389c71e1d4f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.403594 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557854-j5qtb" event={"ID":"7c63f9c2-1e95-432e-9fb3-1313834da13d","Type":"ContainerDied","Data":"5f52f8bd0f67aafb2722f1fa4b5a1710b420f295112561ee89d0a53b8b175a27"}
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.403637 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f52f8bd0f67aafb2722f1fa4b5a1710b420f295112561ee89d0a53b8b175a27"
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.403691 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557854-j5qtb"
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.478333 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557848-lfg92"]
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.486076 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557848-lfg92"]
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.808419 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5dc6f976cd-xr97w_bfd749ee-b04f-45eb-8a54-2594c1d4378f/keystone-api/0.log"
Mar 14 06:54:05 crc kubenswrapper[4817]: I0314 06:54:05.898567 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ea6cf2e0-1f09-4e7b-8e73-21363bcad511/probe/0.log"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.041315 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ea6cf2e0-1f09-4e7b-8e73-21363bcad511/manila-scheduler/0.log"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.148619 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f23f27ea-4a4a-44ca-9b1c-457e8e4e397a/manila-api/0.log"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.208742 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_59850886-78b9-425e-895a-4a0f48438dbd/probe/0.log"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.378738 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_59850886-78b9-425e-895a-4a0f48438dbd/manila-share/0.log"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.527785 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_f23f27ea-4a4a-44ca-9b1c-457e8e4e397a/manila-api-log/0.log"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.620398 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-549548697-x46rl_ea9ff5c7-bf16-488f-8289-cbc134c9416e/neutron-api/0.log"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.665584 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-549548697-x46rl_ea9ff5c7-bf16-488f-8289-cbc134c9416e/neutron-httpd/0.log"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.745616 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8" path="/var/lib/kubelet/pods/d7a3d7e7-0564-4506-a9bf-f39b9c0a9bd8/volumes"
Mar 14 06:54:06 crc kubenswrapper[4817]: I0314 06:54:06.763922 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-f5d2s_87c92ad1-a3e6-4c1c-acf5-d58ba63e2b91/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:07 crc kubenswrapper[4817]: I0314 06:54:07.093414 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd4cb5c8-0bac-4213-bc4d-42805d1b03f7/nova-api-log/0.log"
Mar 14 06:54:07 crc kubenswrapper[4817]: I0314 06:54:07.242625 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4ffda2e9-2972-4633-aa0f-207e8095c237/nova-cell0-conductor-conductor/0.log"
Mar 14 06:54:07 crc kubenswrapper[4817]: I0314 06:54:07.546004 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5af4f1fd-b11b-42bd-ba99-0a9658136ea0/nova-cell1-conductor-conductor/0.log"
Mar 14 06:54:07 crc kubenswrapper[4817]: I0314 06:54:07.699358 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_acdb327b-4e5c-4ea0-bf01-a46f9e0034b0/nova-cell1-novncproxy-novncproxy/0.log"
Mar 14 06:54:07 crc kubenswrapper[4817]: I0314 06:54:07.718325 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd4cb5c8-0bac-4213-bc4d-42805d1b03f7/nova-api-api/0.log"
Mar 14 06:54:07 crc kubenswrapper[4817]: I0314 06:54:07.784478 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4d994_78a366f5-7ad6-43e0-be63-4c63cf2b21e8/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:08 crc kubenswrapper[4817]: I0314 06:54:08.056802 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d905ec00-43c0-4f8b-a52d-414b74697fb2/nova-metadata-log/0.log"
Mar 14 06:54:08 crc kubenswrapper[4817]: I0314 06:54:08.339188 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29e96de0-75d9-4da0-a41e-3b93a7274083/mysql-bootstrap/0.log"
Mar 14 06:54:08 crc kubenswrapper[4817]: I0314 06:54:08.463705 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e65fa238-9d14-4be7-ae7b-0b3eb077d575/nova-scheduler-scheduler/0.log"
Mar 14 06:54:08 crc kubenswrapper[4817]: I0314 06:54:08.546620 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29e96de0-75d9-4da0-a41e-3b93a7274083/mysql-bootstrap/0.log"
Mar 14 06:54:08 crc kubenswrapper[4817]: I0314 06:54:08.558883 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_29e96de0-75d9-4da0-a41e-3b93a7274083/galera/0.log"
Mar 14 06:54:08 crc kubenswrapper[4817]: I0314 06:54:08.577279 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d905ec00-43c0-4f8b-a52d-414b74697fb2/nova-metadata-metadata/0.log"
Mar 14 06:54:08 crc kubenswrapper[4817]: I0314 06:54:08.807801 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_099647fc-5cd6-4547-9400-8df4b6016b50/mysql-bootstrap/0.log"
Mar 14 06:54:08 crc kubenswrapper[4817]: I0314 06:54:08.993400 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_099647fc-5cd6-4547-9400-8df4b6016b50/mysql-bootstrap/0.log"
Mar 14 06:54:09 crc kubenswrapper[4817]: I0314 06:54:09.005753 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_099647fc-5cd6-4547-9400-8df4b6016b50/galera/0.log"
Mar 14 06:54:09 crc kubenswrapper[4817]: I0314 06:54:09.100875 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_30bca2b3-da0c-4f63-b9fb-95c742af358e/openstackclient/0.log"
Mar 14 06:54:09 crc kubenswrapper[4817]: I0314 06:54:09.200802 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-944q4_e515005b-34d6-46cb-8486-cf2e09877f9d/openstack-network-exporter/0.log"
Mar 14 06:54:09 crc kubenswrapper[4817]: I0314 06:54:09.361329 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nsn5q_9790d6d0-9013-42cf-bb3d-394f5fc292ba/ovn-controller/0.log"
Mar 14 06:54:09 crc kubenswrapper[4817]: I0314 06:54:09.527457 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rj9cw_900c734a-f840-4877-98fd-ff1415d6ad18/ovsdb-server-init/0.log"
Mar 14 06:54:09 crc kubenswrapper[4817]: I0314 06:54:09.739150 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rj9cw_900c734a-f840-4877-98fd-ff1415d6ad18/ovs-vswitchd/0.log"
Mar 14 06:54:09 crc kubenswrapper[4817]: I0314 06:54:09.757691 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rj9cw_900c734a-f840-4877-98fd-ff1415d6ad18/ovsdb-server/0.log"
Mar 14 06:54:09 crc kubenswrapper[4817]: I0314 06:54:09.788429 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rj9cw_900c734a-f840-4877-98fd-ff1415d6ad18/ovsdb-server-init/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.033832 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_094c63b8-a153-4ae4-90a5-d65b5718abd1/ovn-northd/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.083428 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_094c63b8-a153-4ae4-90a5-d65b5718abd1/openstack-network-exporter/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.092135 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-52cpk_dd55a087-6a2d-4515-9774-96d247a25d52/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.249028 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5d70fa19-d760-4bb0-b182-c1bcf7797f96/openstack-network-exporter/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.279936 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5d70fa19-d760-4bb0-b182-c1bcf7797f96/ovsdbserver-nb/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.505239 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_523f27ff-3994-4742-af55-15befc50017e/openstack-network-exporter/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.541729 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_523f27ff-3994-4742-af55-15befc50017e/ovsdbserver-sb/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.680536 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-67dfb54788-qqrtk_5aad3d14-3e24-460a-b6b3-9508031f76d6/placement-api/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.799787 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-67dfb54788-qqrtk_5aad3d14-3e24-460a-b6b3-9508031f76d6/placement-log/0.log"
Mar 14 06:54:10 crc kubenswrapper[4817]: I0314 06:54:10.885627 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9ecd33b-91dc-44fc-932d-d962a7835af9/setup-container/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.102209 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da2757e1-ec61-4c1f-8060-3da273bd77cd/setup-container/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.120339 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9ecd33b-91dc-44fc-932d-d962a7835af9/rabbitmq/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.154386 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9ecd33b-91dc-44fc-932d-d962a7835af9/setup-container/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.305256 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da2757e1-ec61-4c1f-8060-3da273bd77cd/setup-container/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.397820 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da2757e1-ec61-4c1f-8060-3da273bd77cd/rabbitmq/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.500348 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bptrt_350b95d7-fdff-421f-bb13-2b9b307e0918/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.618277 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-76mst_b9ce930d-a273-4240-ad64-19c4d50a3ec6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.718204 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-h5rd5_9ccb7da0-02de-4f65-9b76-6c8c0a47a34e/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:11 crc kubenswrapper[4817]: I0314 06:54:11.843856 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-g7866_9f8afb0b-9422-463a-86d7-9c59bcfac32f/ssh-known-hosts-edpm-deployment/0.log"
Mar 14 06:54:12 crc kubenswrapper[4817]: I0314 06:54:12.021958 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_106079f9-3258-4c46-8ef4-1811c407fc69/tempest-tests-tempest-tests-runner/0.log"
Mar 14 06:54:12 crc kubenswrapper[4817]: I0314 06:54:12.105591 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_068e1487-9973-4256-8646-2ef08528eeda/test-operator-logs-container/0.log"
Mar 14 06:54:12 crc kubenswrapper[4817]: I0314 06:54:12.247666 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4m6kj_b73850a9-8701-4b80-8944-a762eaa7cf5e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Mar 14 06:54:28 crc kubenswrapper[4817]: I0314 06:54:28.593963 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0fdabc42-d61c-48c0-b3f2-cf5d5f994e5e/memcached/0.log"
Mar 14 06:54:38 crc kubenswrapper[4817]: I0314 06:54:38.565798 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:54:38 crc kubenswrapper[4817]: I0314 06:54:38.566614 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:54:42 crc kubenswrapper[4817]: I0314 06:54:42.262720 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/util/0.log"
Mar 14 06:54:42 crc kubenswrapper[4817]: I0314 06:54:42.480840 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/util/0.log"
Mar 14 06:54:42 crc kubenswrapper[4817]: I0314 06:54:42.528195 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/pull/0.log"
Mar 14 06:54:42 crc kubenswrapper[4817]: I0314 06:54:42.533322 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/pull/0.log"
Mar 14 06:54:42 crc kubenswrapper[4817]: I0314 06:54:42.678047 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/util/0.log"
Mar 14 06:54:42 crc kubenswrapper[4817]: I0314 06:54:42.680271 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/pull/0.log"
Mar 14 06:54:42 crc kubenswrapper[4817]: I0314 06:54:42.717151 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5e99c5c88341db4367ad06e00e941e081bed8abcd366bf1aad5092cc87fmt2l_aefb397f-8ba5-4680-9976-39dc26760fd7/extract/0.log"
Mar 14 06:54:42 crc kubenswrapper[4817]: I0314 06:54:42.924236 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-d47688694-7cvhh_71a2aa8e-73f2-46c2-b8ad-2230259a3ede/manager/0.log"
Mar 14 06:54:43 crc kubenswrapper[4817]: I0314 06:54:43.142923 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-c4p8n_c4dbd02a-f20d-4b34-93f7-cd8f7eb0ca50/manager/0.log"
Mar 14 06:54:44 crc kubenswrapper[4817]: I0314 06:54:44.002864 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-lssxv_78bda46b-797b-4cc4-9cf5-14a2bc692947/manager/0.log"
Mar 14 06:54:44 crc kubenswrapper[4817]: I0314 06:54:44.036732 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-qrpsc_0d8de2cd-0cc8-40b5-a549-7632e38e11a9/manager/0.log"
Mar 14 06:54:44 crc kubenswrapper[4817]: I0314 06:54:44.300491 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-9w9wj_2b1ddd08-cff6-4ec8-b701-77ad200ebd2f/manager/0.log"
Mar 14 06:54:44 crc kubenswrapper[4817]: I0314 06:54:44.570508 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bc894d9b-blx9z_895e1039-8354-4cb0-85d0-a0b2cc112db6/manager/0.log"
Mar 14 06:54:44 crc kubenswrapper[4817]: I0314 06:54:44.690912 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-vsjwb_f1029de4-c046-47b8-820b-113369bf590a/manager/0.log"
Mar 14 06:54:44 crc kubenswrapper[4817]: I0314 06:54:44.780512 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54dc5b8f8d-sbktt_15bddcc1-f479-4390-96f1-f0fd2cd43578/manager/0.log"
Mar 14 06:54:44 crc kubenswrapper[4817]: I0314 06:54:44.826172 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-mxnmp_2d86f031-6e59-41cb-a7c1-cfe91c54630b/manager/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.008372 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6b74cf5dc5-4tws4_6dc5b773-7e09-4d0c-b7fb-e73a398784dd/manager/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.088472 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5b6b6b4c9f-g6xxf_3d2cc81d-8675-4b17-a429-c0a29be998d9/manager/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.221771 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-gzl7j_4e484bfe-93ce-49cb-b687-2fd92d4a8b60/manager/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.302945 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f84474648-kzrj9_caf0784a-accc-4973-8daf-7239f91eacb3/manager/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.430838 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-xvl5j_7435e15d-0e12-4192-9725-59c501707754/manager/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.528283 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7v78tk_2c529641-9582-4a57-b54d-f1f733f21a89/manager/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.646936 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bf7c47ddb-srdqq_32b598c5-f4cd-4c5d-9189-e8985b451ae2/operator/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.757033 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-686t2_e2e7b4b7-9377-4f51-92ff-8d8024a13484/registry-server/0.log"
Mar 14 06:54:45 crc kubenswrapper[4817]: I0314 06:54:45.918638 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-75w5q_12489bc5-14ae-42cb-9717-341e479b9e53/manager/0.log"
Mar 14 06:54:46 crc kubenswrapper[4817]: I0314 06:54:46.049767 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-hs4lc_5b65131d-d231-46ae-b5f9-95c9e4a0d69a/manager/0.log"
Mar 14 06:54:46 crc kubenswrapper[4817]: I0314 06:54:46.197354 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6dknb_9d193754-974c-4c1a-a142-28fc5f109935/operator/0.log"
Mar 14 06:54:46 crc kubenswrapper[4817]: I0314 06:54:46.296086 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7f9cc5dd44-nb94v_be9a0979-c4ed-471b-9cc9-c3dd753f106d/manager/0.log"
Mar 14 06:54:46 crc kubenswrapper[4817]: I0314 06:54:46.474327 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6854b8b9d9-cr7p6_fa24c106-dfe2-4250-9b00-b063f21f0dcd/manager/0.log"
Mar 14 06:54:46 crc kubenswrapper[4817]: I0314 06:54:46.575546 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-9j88f_762056c0-1243-4e41-87ea-242c1d082965/manager/0.log"
Mar 14 06:54:46 crc kubenswrapper[4817]: I0314
06:54:46.752803 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-xrd4b_d0f348f1-96ec-46cd-bdd4-cccc5ad93f1c/manager/0.log" Mar 14 06:54:46 crc kubenswrapper[4817]: I0314 06:54:46.938802 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5bb879dbb8-vlsfx_1351dc38-2b39-4e57-869d-1b430e900250/manager/0.log" Mar 14 06:54:48 crc kubenswrapper[4817]: I0314 06:54:48.561673 4817 scope.go:117] "RemoveContainer" containerID="f1305c7a640a97a873b799351d97dc105082383814f35c100feb0c4fc3de799a" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.793711 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2m7z9"] Mar 14 06:55:05 crc kubenswrapper[4817]: E0314 06:55:05.794701 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c63f9c2-1e95-432e-9fb3-1313834da13d" containerName="oc" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.794718 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c63f9c2-1e95-432e-9fb3-1313834da13d" containerName="oc" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.795003 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c63f9c2-1e95-432e-9fb3-1313834da13d" containerName="oc" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.796624 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.817836 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2m7z9"] Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.864608 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgsh7\" (UniqueName: \"kubernetes.io/projected/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-kube-api-access-cgsh7\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.864753 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-catalog-content\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.864941 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-utilities\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.966449 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-catalog-content\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.966585 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-utilities\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.966630 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgsh7\" (UniqueName: \"kubernetes.io/projected/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-kube-api-access-cgsh7\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.967223 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-catalog-content\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.967232 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-utilities\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:05 crc kubenswrapper[4817]: I0314 06:55:05.996166 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgsh7\" (UniqueName: \"kubernetes.io/projected/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-kube-api-access-cgsh7\") pod \"community-operators-2m7z9\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:06 crc kubenswrapper[4817]: I0314 06:55:06.121132 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:06 crc kubenswrapper[4817]: I0314 06:55:06.617491 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2m7z9"] Mar 14 06:55:06 crc kubenswrapper[4817]: I0314 06:55:06.937006 4817 generic.go:334] "Generic (PLEG): container finished" podID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerID="dcf38d52dd1c288410550463423c1478ca86c936aa396db7413a7dd4caf89bef" exitCode=0 Mar 14 06:55:06 crc kubenswrapper[4817]: I0314 06:55:06.937064 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m7z9" event={"ID":"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb","Type":"ContainerDied","Data":"dcf38d52dd1c288410550463423c1478ca86c936aa396db7413a7dd4caf89bef"} Mar 14 06:55:06 crc kubenswrapper[4817]: I0314 06:55:06.937285 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m7z9" event={"ID":"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb","Type":"ContainerStarted","Data":"20040f0cc8879a05dedb83f1e8b287d972bcde083baa420f5ea1d2ba70b72f8b"} Mar 14 06:55:07 crc kubenswrapper[4817]: I0314 06:55:07.223975 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mgpqw_5f8f4a28-ea66-4a2a-8cc8-ad845efd3266/control-plane-machine-set-operator/0.log" Mar 14 06:55:07 crc kubenswrapper[4817]: I0314 06:55:07.412175 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7zj7n_9f2c60cc-433e-4f50-8eca-2fb0ddd7982d/machine-api-operator/0.log" Mar 14 06:55:07 crc kubenswrapper[4817]: I0314 06:55:07.487580 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7zj7n_9f2c60cc-433e-4f50-8eca-2fb0ddd7982d/kube-rbac-proxy/0.log" Mar 14 06:55:07 crc kubenswrapper[4817]: I0314 06:55:07.947556 4817 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m7z9" event={"ID":"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb","Type":"ContainerStarted","Data":"34697b545d3d4f048e63b32fe4f82c78343180cfd03678c719e969f05b27ff70"} Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.566083 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.566137 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.597925 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-twvg9"] Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.600111 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.615420 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twvg9"] Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.716219 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52w4t\" (UniqueName: \"kubernetes.io/projected/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-kube-api-access-52w4t\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.716937 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-utilities\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.717118 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-catalog-content\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.819560 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52w4t\" (UniqueName: \"kubernetes.io/projected/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-kube-api-access-52w4t\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.819622 4817 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-utilities\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.819743 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-catalog-content\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.820187 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-utilities\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.820239 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-catalog-content\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:08 crc kubenswrapper[4817]: I0314 06:55:08.950879 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52w4t\" (UniqueName: \"kubernetes.io/projected/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-kube-api-access-52w4t\") pod \"redhat-operators-twvg9\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:09 crc kubenswrapper[4817]: I0314 06:55:09.223098 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:09 crc kubenswrapper[4817]: I0314 06:55:09.717433 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twvg9"] Mar 14 06:55:09 crc kubenswrapper[4817]: I0314 06:55:09.984244 4817 generic.go:334] "Generic (PLEG): container finished" podID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerID="34697b545d3d4f048e63b32fe4f82c78343180cfd03678c719e969f05b27ff70" exitCode=0 Mar 14 06:55:09 crc kubenswrapper[4817]: I0314 06:55:09.984303 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m7z9" event={"ID":"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb","Type":"ContainerDied","Data":"34697b545d3d4f048e63b32fe4f82c78343180cfd03678c719e969f05b27ff70"} Mar 14 06:55:09 crc kubenswrapper[4817]: I0314 06:55:09.986054 4817 generic.go:334] "Generic (PLEG): container finished" podID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerID="253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b" exitCode=0 Mar 14 06:55:09 crc kubenswrapper[4817]: I0314 06:55:09.986076 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvg9" event={"ID":"afa93ad9-e581-46e7-b8d3-d2ee1cb52238","Type":"ContainerDied","Data":"253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b"} Mar 14 06:55:09 crc kubenswrapper[4817]: I0314 06:55:09.986092 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvg9" event={"ID":"afa93ad9-e581-46e7-b8d3-d2ee1cb52238","Type":"ContainerStarted","Data":"b5c5c256a34252fcd12c7afd0c90d8c5c0522f778931f180e1f6ac1c9139c3a6"} Mar 14 06:55:12 crc kubenswrapper[4817]: I0314 06:55:12.045253 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m7z9" 
event={"ID":"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb","Type":"ContainerStarted","Data":"6efd8b96f64ce24549e63a67ab83339a3b918a284d57376e455327e79c9da821"} Mar 14 06:55:12 crc kubenswrapper[4817]: I0314 06:55:12.050284 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvg9" event={"ID":"afa93ad9-e581-46e7-b8d3-d2ee1cb52238","Type":"ContainerStarted","Data":"9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8"} Mar 14 06:55:12 crc kubenswrapper[4817]: I0314 06:55:12.080605 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2m7z9" podStartSLOduration=3.498755027 podStartE2EDuration="7.080581486s" podCreationTimestamp="2026-03-14 06:55:05 +0000 UTC" firstStartedPulling="2026-03-14 06:55:06.938657668 +0000 UTC m=+4960.976918414" lastFinishedPulling="2026-03-14 06:55:10.520484127 +0000 UTC m=+4964.558744873" observedRunningTime="2026-03-14 06:55:12.0634981 +0000 UTC m=+4966.101758846" watchObservedRunningTime="2026-03-14 06:55:12.080581486 +0000 UTC m=+4966.118842232" Mar 14 06:55:14 crc kubenswrapper[4817]: I0314 06:55:14.068870 4817 generic.go:334] "Generic (PLEG): container finished" podID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerID="9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8" exitCode=0 Mar 14 06:55:14 crc kubenswrapper[4817]: I0314 06:55:14.068919 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvg9" event={"ID":"afa93ad9-e581-46e7-b8d3-d2ee1cb52238","Type":"ContainerDied","Data":"9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8"} Mar 14 06:55:15 crc kubenswrapper[4817]: I0314 06:55:15.078722 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvg9" 
event={"ID":"afa93ad9-e581-46e7-b8d3-d2ee1cb52238","Type":"ContainerStarted","Data":"8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066"} Mar 14 06:55:15 crc kubenswrapper[4817]: I0314 06:55:15.105031 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-twvg9" podStartSLOduration=2.500003434 podStartE2EDuration="7.105009378s" podCreationTimestamp="2026-03-14 06:55:08 +0000 UTC" firstStartedPulling="2026-03-14 06:55:09.988087442 +0000 UTC m=+4964.026348188" lastFinishedPulling="2026-03-14 06:55:14.593093386 +0000 UTC m=+4968.631354132" observedRunningTime="2026-03-14 06:55:15.100012686 +0000 UTC m=+4969.138273452" watchObservedRunningTime="2026-03-14 06:55:15.105009378 +0000 UTC m=+4969.143270124" Mar 14 06:55:16 crc kubenswrapper[4817]: I0314 06:55:16.123871 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:16 crc kubenswrapper[4817]: I0314 06:55:16.124166 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:16 crc kubenswrapper[4817]: I0314 06:55:16.195584 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:17 crc kubenswrapper[4817]: I0314 06:55:17.143049 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:19 crc kubenswrapper[4817]: I0314 06:55:19.223345 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:19 crc kubenswrapper[4817]: I0314 06:55:19.223594 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:19 crc kubenswrapper[4817]: I0314 06:55:19.776224 4817 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2m7z9"] Mar 14 06:55:19 crc kubenswrapper[4817]: I0314 06:55:19.776798 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2m7z9" podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerName="registry-server" containerID="cri-o://6efd8b96f64ce24549e63a67ab83339a3b918a284d57376e455327e79c9da821" gracePeriod=2 Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.122948 4817 generic.go:334] "Generic (PLEG): container finished" podID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerID="6efd8b96f64ce24549e63a67ab83339a3b918a284d57376e455327e79c9da821" exitCode=0 Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.123204 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m7z9" event={"ID":"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb","Type":"ContainerDied","Data":"6efd8b96f64ce24549e63a67ab83339a3b918a284d57376e455327e79c9da821"} Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.277740 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.284780 4817 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-twvg9" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="registry-server" probeResult="failure" output=< Mar 14 06:55:20 crc kubenswrapper[4817]: timeout: failed to connect service ":50051" within 1s Mar 14 06:55:20 crc kubenswrapper[4817]: > Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.463473 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-catalog-content\") pod \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.463570 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-utilities\") pod \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.465166 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgsh7\" (UniqueName: \"kubernetes.io/projected/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-kube-api-access-cgsh7\") pod \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\" (UID: \"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb\") " Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.465625 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-utilities" (OuterVolumeSpecName: "utilities") pod "d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" (UID: "d1c73a7b-c57c-4651-9d8c-c5b92742e0eb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.468115 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.474263 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-kube-api-access-cgsh7" (OuterVolumeSpecName: "kube-api-access-cgsh7") pod "d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" (UID: "d1c73a7b-c57c-4651-9d8c-c5b92742e0eb"). InnerVolumeSpecName "kube-api-access-cgsh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.537886 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" (UID: "d1c73a7b-c57c-4651-9d8c-c5b92742e0eb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.569865 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgsh7\" (UniqueName: \"kubernetes.io/projected/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-kube-api-access-cgsh7\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:20 crc kubenswrapper[4817]: I0314 06:55:20.569911 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:21 crc kubenswrapper[4817]: I0314 06:55:21.133696 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2m7z9" event={"ID":"d1c73a7b-c57c-4651-9d8c-c5b92742e0eb","Type":"ContainerDied","Data":"20040f0cc8879a05dedb83f1e8b287d972bcde083baa420f5ea1d2ba70b72f8b"} Mar 14 06:55:21 crc kubenswrapper[4817]: I0314 06:55:21.134088 4817 scope.go:117] "RemoveContainer" containerID="6efd8b96f64ce24549e63a67ab83339a3b918a284d57376e455327e79c9da821" Mar 14 06:55:21 crc kubenswrapper[4817]: I0314 06:55:21.133774 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2m7z9" Mar 14 06:55:21 crc kubenswrapper[4817]: I0314 06:55:21.154953 4817 scope.go:117] "RemoveContainer" containerID="34697b545d3d4f048e63b32fe4f82c78343180cfd03678c719e969f05b27ff70" Mar 14 06:55:21 crc kubenswrapper[4817]: I0314 06:55:21.173910 4817 scope.go:117] "RemoveContainer" containerID="dcf38d52dd1c288410550463423c1478ca86c936aa396db7413a7dd4caf89bef" Mar 14 06:55:21 crc kubenswrapper[4817]: I0314 06:55:21.181197 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2m7z9"] Mar 14 06:55:21 crc kubenswrapper[4817]: I0314 06:55:21.189716 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2m7z9"] Mar 14 06:55:22 crc kubenswrapper[4817]: I0314 06:55:22.743838 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" path="/var/lib/kubelet/pods/d1c73a7b-c57c-4651-9d8c-c5b92742e0eb/volumes" Mar 14 06:55:23 crc kubenswrapper[4817]: I0314 06:55:23.786485 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wlf2j_5791cc48-c47e-41dd-9679-e38124d37511/cert-manager-controller/0.log" Mar 14 06:55:23 crc kubenswrapper[4817]: I0314 06:55:23.952595 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xpmdf_15c7af3c-d545-4e70-b954-29763522ee1f/cert-manager-cainjector/0.log" Mar 14 06:55:24 crc kubenswrapper[4817]: I0314 06:55:24.056617 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-64mg6_c9590aff-9888-44dc-ab0c-47959f244b5e/cert-manager-webhook/0.log" Mar 14 06:55:29 crc kubenswrapper[4817]: I0314 06:55:29.278769 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:29 crc kubenswrapper[4817]: I0314 
06:55:29.329495 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:29 crc kubenswrapper[4817]: I0314 06:55:29.519420 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twvg9"] Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.246359 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-twvg9" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="registry-server" containerID="cri-o://8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066" gracePeriod=2 Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.734448 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zxtpk"] Mar 14 06:55:31 crc kubenswrapper[4817]: E0314 06:55:31.735154 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerName="registry-server" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.735172 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerName="registry-server" Mar 14 06:55:31 crc kubenswrapper[4817]: E0314 06:55:31.735190 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerName="extract-content" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.735197 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerName="extract-content" Mar 14 06:55:31 crc kubenswrapper[4817]: E0314 06:55:31.735206 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerName="extract-utilities" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.735213 4817 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerName="extract-utilities" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.735390 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c73a7b-c57c-4651-9d8c-c5b92742e0eb" containerName="registry-server" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.738315 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.769928 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxtpk"] Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.800005 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.895034 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-utilities\") pod \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.895177 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-catalog-content\") pod \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.895320 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52w4t\" (UniqueName: \"kubernetes.io/projected/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-kube-api-access-52w4t\") pod \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\" (UID: \"afa93ad9-e581-46e7-b8d3-d2ee1cb52238\") " Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.895641 4817 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-catalog-content\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.895674 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-utilities\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.895733 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2fv\" (UniqueName: \"kubernetes.io/projected/f989fb0b-787a-4a05-ab4b-49d5de6684e6-kube-api-access-nb2fv\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.895746 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-utilities" (OuterVolumeSpecName: "utilities") pod "afa93ad9-e581-46e7-b8d3-d2ee1cb52238" (UID: "afa93ad9-e581-46e7-b8d3-d2ee1cb52238"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.895881 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.902086 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-kube-api-access-52w4t" (OuterVolumeSpecName: "kube-api-access-52w4t") pod "afa93ad9-e581-46e7-b8d3-d2ee1cb52238" (UID: "afa93ad9-e581-46e7-b8d3-d2ee1cb52238"). InnerVolumeSpecName "kube-api-access-52w4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.997343 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-catalog-content\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.997404 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-utilities\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.997475 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2fv\" (UniqueName: \"kubernetes.io/projected/f989fb0b-787a-4a05-ab4b-49d5de6684e6-kube-api-access-nb2fv\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:31 crc 
kubenswrapper[4817]: I0314 06:55:31.997602 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52w4t\" (UniqueName: \"kubernetes.io/projected/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-kube-api-access-52w4t\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.998335 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-catalog-content\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:31 crc kubenswrapper[4817]: I0314 06:55:31.998354 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-utilities\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.020227 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb2fv\" (UniqueName: \"kubernetes.io/projected/f989fb0b-787a-4a05-ab4b-49d5de6684e6-kube-api-access-nb2fv\") pod \"certified-operators-zxtpk\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.033718 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afa93ad9-e581-46e7-b8d3-d2ee1cb52238" (UID: "afa93ad9-e581-46e7-b8d3-d2ee1cb52238"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.099442 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa93ad9-e581-46e7-b8d3-d2ee1cb52238-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.108813 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.290594 4817 generic.go:334] "Generic (PLEG): container finished" podID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerID="8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066" exitCode=0 Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.290842 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvg9" event={"ID":"afa93ad9-e581-46e7-b8d3-d2ee1cb52238","Type":"ContainerDied","Data":"8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066"} Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.290869 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvg9" event={"ID":"afa93ad9-e581-46e7-b8d3-d2ee1cb52238","Type":"ContainerDied","Data":"b5c5c256a34252fcd12c7afd0c90d8c5c0522f778931f180e1f6ac1c9139c3a6"} Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.290886 4817 scope.go:117] "RemoveContainer" containerID="8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.291043 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twvg9" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.326563 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twvg9"] Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.342195 4817 scope.go:117] "RemoveContainer" containerID="9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.352238 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-twvg9"] Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.377198 4817 scope.go:117] "RemoveContainer" containerID="253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.445943 4817 scope.go:117] "RemoveContainer" containerID="8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066" Mar 14 06:55:32 crc kubenswrapper[4817]: E0314 06:55:32.446353 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066\": container with ID starting with 8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066 not found: ID does not exist" containerID="8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.446385 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066"} err="failed to get container status \"8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066\": rpc error: code = NotFound desc = could not find container \"8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066\": container with ID starting with 8d0fd17feb21d0f33c2caba0ec0d21a69332dd4b87f6945bc02fd6a22d895066 not found: ID does 
not exist" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.446408 4817 scope.go:117] "RemoveContainer" containerID="9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8" Mar 14 06:55:32 crc kubenswrapper[4817]: E0314 06:55:32.446716 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8\": container with ID starting with 9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8 not found: ID does not exist" containerID="9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.446741 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8"} err="failed to get container status \"9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8\": rpc error: code = NotFound desc = could not find container \"9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8\": container with ID starting with 9842c3cf0f9421107a06f6d5a90092cc1efb2fde94de7774d6440ff7712318c8 not found: ID does not exist" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.446755 4817 scope.go:117] "RemoveContainer" containerID="253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b" Mar 14 06:55:32 crc kubenswrapper[4817]: E0314 06:55:32.447041 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b\": container with ID starting with 253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b not found: ID does not exist" containerID="253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b" Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.447062 4817 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b"} err="failed to get container status \"253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b\": rpc error: code = NotFound desc = could not find container \"253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b\": container with ID starting with 253764ed398cbe04c253c63f1982d7758ac93f14c81a91cddd18a92bb645506b not found: ID does not exist" Mar 14 06:55:32 crc kubenswrapper[4817]: W0314 06:55:32.657175 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf989fb0b_787a_4a05_ab4b_49d5de6684e6.slice/crio-f4e8601b9bf3ab18ad351743caae3276d70282c4d61f6770fcab956c32bb2adc WatchSource:0}: Error finding container f4e8601b9bf3ab18ad351743caae3276d70282c4d61f6770fcab956c32bb2adc: Status 404 returned error can't find the container with id f4e8601b9bf3ab18ad351743caae3276d70282c4d61f6770fcab956c32bb2adc Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.657276 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxtpk"] Mar 14 06:55:32 crc kubenswrapper[4817]: I0314 06:55:32.743787 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" path="/var/lib/kubelet/pods/afa93ad9-e581-46e7-b8d3-d2ee1cb52238/volumes" Mar 14 06:55:33 crc kubenswrapper[4817]: I0314 06:55:33.302879 4817 generic.go:334] "Generic (PLEG): container finished" podID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerID="ee23d8c460d9bcafe7790fa340d60d4ee747cc09fc2be4d2bea870d8c31933f3" exitCode=0 Mar 14 06:55:33 crc kubenswrapper[4817]: I0314 06:55:33.303019 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxtpk" 
event={"ID":"f989fb0b-787a-4a05-ab4b-49d5de6684e6","Type":"ContainerDied","Data":"ee23d8c460d9bcafe7790fa340d60d4ee747cc09fc2be4d2bea870d8c31933f3"} Mar 14 06:55:33 crc kubenswrapper[4817]: I0314 06:55:33.303377 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxtpk" event={"ID":"f989fb0b-787a-4a05-ab4b-49d5de6684e6","Type":"ContainerStarted","Data":"f4e8601b9bf3ab18ad351743caae3276d70282c4d61f6770fcab956c32bb2adc"} Mar 14 06:55:34 crc kubenswrapper[4817]: I0314 06:55:34.318648 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxtpk" event={"ID":"f989fb0b-787a-4a05-ab4b-49d5de6684e6","Type":"ContainerStarted","Data":"6d053524ff43385ec6ff2607324e08a3f417d49f49e56b5fc1b90a4954b4f8fd"} Mar 14 06:55:35 crc kubenswrapper[4817]: I0314 06:55:35.328787 4817 generic.go:334] "Generic (PLEG): container finished" podID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerID="6d053524ff43385ec6ff2607324e08a3f417d49f49e56b5fc1b90a4954b4f8fd" exitCode=0 Mar 14 06:55:35 crc kubenswrapper[4817]: I0314 06:55:35.328841 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxtpk" event={"ID":"f989fb0b-787a-4a05-ab4b-49d5de6684e6","Type":"ContainerDied","Data":"6d053524ff43385ec6ff2607324e08a3f417d49f49e56b5fc1b90a4954b4f8fd"} Mar 14 06:55:36 crc kubenswrapper[4817]: I0314 06:55:36.339438 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxtpk" event={"ID":"f989fb0b-787a-4a05-ab4b-49d5de6684e6","Type":"ContainerStarted","Data":"15f15dc2d109a07fb51426cbe89cbf9f4703ebfd6cf6974bcb9bf1b10aab2698"} Mar 14 06:55:36 crc kubenswrapper[4817]: I0314 06:55:36.363309 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zxtpk" podStartSLOduration=2.9344242769999997 podStartE2EDuration="5.363289814s" podCreationTimestamp="2026-03-14 
06:55:31 +0000 UTC" firstStartedPulling="2026-03-14 06:55:33.305652576 +0000 UTC m=+4987.343913332" lastFinishedPulling="2026-03-14 06:55:35.734518123 +0000 UTC m=+4989.772778869" observedRunningTime="2026-03-14 06:55:36.357034806 +0000 UTC m=+4990.395295552" watchObservedRunningTime="2026-03-14 06:55:36.363289814 +0000 UTC m=+4990.401550560" Mar 14 06:55:38 crc kubenswrapper[4817]: I0314 06:55:38.566282 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 06:55:38 crc kubenswrapper[4817]: I0314 06:55:38.566622 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 06:55:38 crc kubenswrapper[4817]: I0314 06:55:38.566669 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" Mar 14 06:55:38 crc kubenswrapper[4817]: I0314 06:55:38.567510 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a981ac978b9b864b85dc9598c79af70065a145631ca5c16d368284a9ee5e7be8"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 06:55:38 crc kubenswrapper[4817]: I0314 06:55:38.567565 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" 
containerName="machine-config-daemon" containerID="cri-o://a981ac978b9b864b85dc9598c79af70065a145631ca5c16d368284a9ee5e7be8" gracePeriod=600 Mar 14 06:55:38 crc kubenswrapper[4817]: I0314 06:55:38.762088 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-q5lsz_2adf871a-8f81-480f-9f20-afe8cfeb93f5/nmstate-console-plugin/0.log" Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.024557 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-8frq4_6b497a2e-9cc7-4484-963f-b2e84eb3681a/kube-rbac-proxy/0.log" Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.049193 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bkggf_c99a8dbf-19cc-401a-8569-0add8d2a31bb/nmstate-handler/0.log" Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.141098 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-8frq4_6b497a2e-9cc7-4484-963f-b2e84eb3681a/nmstate-metrics/0.log" Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.280969 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-dnl76_1bf623dc-b3e0-45af-9273-bc1367d82ab3/nmstate-operator/0.log" Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.342524 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-d565t_ccadd686-fc95-409d-b7a4-b2e797265e56/nmstate-webhook/0.log" Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.368457 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="a981ac978b9b864b85dc9598c79af70065a145631ca5c16d368284a9ee5e7be8" exitCode=0 Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.368494 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"a981ac978b9b864b85dc9598c79af70065a145631ca5c16d368284a9ee5e7be8"} Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.368521 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerStarted","Data":"0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"} Mar 14 06:55:39 crc kubenswrapper[4817]: I0314 06:55:39.368537 4817 scope.go:117] "RemoveContainer" containerID="da407caaf3cb471ff0cd03efce491bf53fc49835b817480566d712b8c1c7a235" Mar 14 06:55:42 crc kubenswrapper[4817]: I0314 06:55:42.109224 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:42 crc kubenswrapper[4817]: I0314 06:55:42.110914 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:42 crc kubenswrapper[4817]: I0314 06:55:42.174634 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:42 crc kubenswrapper[4817]: I0314 06:55:42.474277 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:42 crc kubenswrapper[4817]: I0314 06:55:42.534297 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zxtpk"] Mar 14 06:55:44 crc kubenswrapper[4817]: I0314 06:55:44.415785 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zxtpk" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerName="registry-server" 
containerID="cri-o://15f15dc2d109a07fb51426cbe89cbf9f4703ebfd6cf6974bcb9bf1b10aab2698" gracePeriod=2 Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 06:55:45.427525 4817 generic.go:334] "Generic (PLEG): container finished" podID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerID="15f15dc2d109a07fb51426cbe89cbf9f4703ebfd6cf6974bcb9bf1b10aab2698" exitCode=0 Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 06:55:45.427620 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxtpk" event={"ID":"f989fb0b-787a-4a05-ab4b-49d5de6684e6","Type":"ContainerDied","Data":"15f15dc2d109a07fb51426cbe89cbf9f4703ebfd6cf6974bcb9bf1b10aab2698"} Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 06:55:45.742335 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 06:55:45.912246 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-catalog-content\") pod \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 06:55:45.912429 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb2fv\" (UniqueName: \"kubernetes.io/projected/f989fb0b-787a-4a05-ab4b-49d5de6684e6-kube-api-access-nb2fv\") pod \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 06:55:45.912465 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-utilities\") pod \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\" (UID: \"f989fb0b-787a-4a05-ab4b-49d5de6684e6\") " Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 
06:55:45.913420 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-utilities" (OuterVolumeSpecName: "utilities") pod "f989fb0b-787a-4a05-ab4b-49d5de6684e6" (UID: "f989fb0b-787a-4a05-ab4b-49d5de6684e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 06:55:45.924029 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f989fb0b-787a-4a05-ab4b-49d5de6684e6-kube-api-access-nb2fv" (OuterVolumeSpecName: "kube-api-access-nb2fv") pod "f989fb0b-787a-4a05-ab4b-49d5de6684e6" (UID: "f989fb0b-787a-4a05-ab4b-49d5de6684e6"). InnerVolumeSpecName "kube-api-access-nb2fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:55:45 crc kubenswrapper[4817]: I0314 06:55:45.985566 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f989fb0b-787a-4a05-ab4b-49d5de6684e6" (UID: "f989fb0b-787a-4a05-ab4b-49d5de6684e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.015665 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb2fv\" (UniqueName: \"kubernetes.io/projected/f989fb0b-787a-4a05-ab4b-49d5de6684e6-kube-api-access-nb2fv\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.016255 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.016341 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f989fb0b-787a-4a05-ab4b-49d5de6684e6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.437649 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxtpk" event={"ID":"f989fb0b-787a-4a05-ab4b-49d5de6684e6","Type":"ContainerDied","Data":"f4e8601b9bf3ab18ad351743caae3276d70282c4d61f6770fcab956c32bb2adc"} Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.438012 4817 scope.go:117] "RemoveContainer" containerID="15f15dc2d109a07fb51426cbe89cbf9f4703ebfd6cf6974bcb9bf1b10aab2698" Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.437730 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxtpk" Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.461056 4817 scope.go:117] "RemoveContainer" containerID="6d053524ff43385ec6ff2607324e08a3f417d49f49e56b5fc1b90a4954b4f8fd" Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.483431 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zxtpk"] Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.490700 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zxtpk"] Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.741459 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" path="/var/lib/kubelet/pods/f989fb0b-787a-4a05-ab4b-49d5de6684e6/volumes" Mar 14 06:55:46 crc kubenswrapper[4817]: I0314 06:55:46.967556 4817 scope.go:117] "RemoveContainer" containerID="ee23d8c460d9bcafe7790fa340d60d4ee747cc09fc2be4d2bea870d8c31933f3" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.146653 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557856-666hl"] Mar 14 06:56:00 crc kubenswrapper[4817]: E0314 06:56:00.147468 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="extract-content" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.147479 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="extract-content" Mar 14 06:56:00 crc kubenswrapper[4817]: E0314 06:56:00.147492 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="registry-server" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.147498 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="registry-server" 
Mar 14 06:56:00 crc kubenswrapper[4817]: E0314 06:56:00.147513 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerName="extract-utilities" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.147519 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerName="extract-utilities" Mar 14 06:56:00 crc kubenswrapper[4817]: E0314 06:56:00.147531 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerName="extract-content" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.147538 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerName="extract-content" Mar 14 06:56:00 crc kubenswrapper[4817]: E0314 06:56:00.147553 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerName="registry-server" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.147559 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" containerName="registry-server" Mar 14 06:56:00 crc kubenswrapper[4817]: E0314 06:56:00.147568 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="extract-utilities" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.147574 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="extract-utilities" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.147743 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa93ad9-e581-46e7-b8d3-d2ee1cb52238" containerName="registry-server" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.147774 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="f989fb0b-787a-4a05-ab4b-49d5de6684e6" 
containerName="registry-server" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.148362 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557856-666hl" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.151102 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.151428 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.157298 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.163377 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557856-666hl"] Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.251457 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xm94\" (UniqueName: \"kubernetes.io/projected/58c48f51-ffaf-484b-bd9b-4f2dae9c8e37-kube-api-access-7xm94\") pod \"auto-csr-approver-29557856-666hl\" (UID: \"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37\") " pod="openshift-infra/auto-csr-approver-29557856-666hl" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.353873 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xm94\" (UniqueName: \"kubernetes.io/projected/58c48f51-ffaf-484b-bd9b-4f2dae9c8e37-kube-api-access-7xm94\") pod \"auto-csr-approver-29557856-666hl\" (UID: \"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37\") " pod="openshift-infra/auto-csr-approver-29557856-666hl" Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.375978 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xm94\" (UniqueName: 
\"kubernetes.io/projected/58c48f51-ffaf-484b-bd9b-4f2dae9c8e37-kube-api-access-7xm94\") pod \"auto-csr-approver-29557856-666hl\" (UID: \"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37\") " pod="openshift-infra/auto-csr-approver-29557856-666hl"
Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.468771 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557856-666hl"
Mar 14 06:56:00 crc kubenswrapper[4817]: I0314 06:56:00.923356 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557856-666hl"]
Mar 14 06:56:01 crc kubenswrapper[4817]: I0314 06:56:01.592870 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557856-666hl" event={"ID":"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37","Type":"ContainerStarted","Data":"1845a33308fd0f807cd8971640ee89f00d92406066431edafc921d08c595e696"}
Mar 14 06:56:02 crc kubenswrapper[4817]: I0314 06:56:02.603145 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557856-666hl" event={"ID":"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37","Type":"ContainerStarted","Data":"8c3c23bd74a8e8a98106ccafc98b60700544d1d786a0a724b64b51498b242837"}
Mar 14 06:56:02 crc kubenswrapper[4817]: I0314 06:56:02.627404 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557856-666hl" podStartSLOduration=1.376405467 podStartE2EDuration="2.627380681s" podCreationTimestamp="2026-03-14 06:56:00 +0000 UTC" firstStartedPulling="2026-03-14 06:56:00.933192882 +0000 UTC m=+5014.971453628" lastFinishedPulling="2026-03-14 06:56:02.184168096 +0000 UTC m=+5016.222428842" observedRunningTime="2026-03-14 06:56:02.615260926 +0000 UTC m=+5016.653521672" watchObservedRunningTime="2026-03-14 06:56:02.627380681 +0000 UTC m=+5016.665641417"
Mar 14 06:56:03 crc kubenswrapper[4817]: I0314 06:56:03.613762 4817 generic.go:334] "Generic (PLEG): container finished" podID="58c48f51-ffaf-484b-bd9b-4f2dae9c8e37" containerID="8c3c23bd74a8e8a98106ccafc98b60700544d1d786a0a724b64b51498b242837" exitCode=0
Mar 14 06:56:03 crc kubenswrapper[4817]: I0314 06:56:03.614157 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557856-666hl" event={"ID":"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37","Type":"ContainerDied","Data":"8c3c23bd74a8e8a98106ccafc98b60700544d1d786a0a724b64b51498b242837"}
Mar 14 06:56:04 crc kubenswrapper[4817]: I0314 06:56:04.998645 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557856-666hl"
Mar 14 06:56:05 crc kubenswrapper[4817]: I0314 06:56:05.153774 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xm94\" (UniqueName: \"kubernetes.io/projected/58c48f51-ffaf-484b-bd9b-4f2dae9c8e37-kube-api-access-7xm94\") pod \"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37\" (UID: \"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37\") "
Mar 14 06:56:05 crc kubenswrapper[4817]: I0314 06:56:05.167946 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c48f51-ffaf-484b-bd9b-4f2dae9c8e37-kube-api-access-7xm94" (OuterVolumeSpecName: "kube-api-access-7xm94") pod "58c48f51-ffaf-484b-bd9b-4f2dae9c8e37" (UID: "58c48f51-ffaf-484b-bd9b-4f2dae9c8e37"). InnerVolumeSpecName "kube-api-access-7xm94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:56:05 crc kubenswrapper[4817]: I0314 06:56:05.256024 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xm94\" (UniqueName: \"kubernetes.io/projected/58c48f51-ffaf-484b-bd9b-4f2dae9c8e37-kube-api-access-7xm94\") on node \"crc\" DevicePath \"\""
Mar 14 06:56:05 crc kubenswrapper[4817]: I0314 06:56:05.637738 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557856-666hl" event={"ID":"58c48f51-ffaf-484b-bd9b-4f2dae9c8e37","Type":"ContainerDied","Data":"1845a33308fd0f807cd8971640ee89f00d92406066431edafc921d08c595e696"}
Mar 14 06:56:05 crc kubenswrapper[4817]: I0314 06:56:05.638050 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1845a33308fd0f807cd8971640ee89f00d92406066431edafc921d08c595e696"
Mar 14 06:56:05 crc kubenswrapper[4817]: I0314 06:56:05.637800 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557856-666hl"
Mar 14 06:56:05 crc kubenswrapper[4817]: I0314 06:56:05.728860 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557850-8m6zd"]
Mar 14 06:56:05 crc kubenswrapper[4817]: I0314 06:56:05.744491 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557850-8m6zd"]
Mar 14 06:56:06 crc kubenswrapper[4817]: I0314 06:56:06.745469 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61" path="/var/lib/kubelet/pods/e9c1c852-4fd0-4e16-8d7e-e1f0fb8a8a61/volumes"
Mar 14 06:56:11 crc kubenswrapper[4817]: I0314 06:56:11.737470 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-88cxz_e421e9b6-37af-4150-8a96-419fe1e1f267/kube-rbac-proxy/0.log"
Mar 14 06:56:11 crc kubenswrapper[4817]: I0314 06:56:11.850871 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-88cxz_e421e9b6-37af-4150-8a96-419fe1e1f267/controller/0.log"
Mar 14 06:56:11 crc kubenswrapper[4817]: I0314 06:56:11.962107 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-frr-files/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.104931 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-reloader/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.128624 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-frr-files/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.156079 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-metrics/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.206204 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-reloader/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.318448 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-frr-files/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.319238 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-reloader/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.344996 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-metrics/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.364738 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-metrics/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.558934 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-reloader/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.567745 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-frr-files/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.655793 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/controller/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.667283 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/cp-metrics/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.770490 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/frr-metrics/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.877053 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/kube-rbac-proxy-frr/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.908095 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/kube-rbac-proxy/0.log"
Mar 14 06:56:12 crc kubenswrapper[4817]: I0314 06:56:12.960116 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/reloader/0.log"
Mar 14 06:56:13 crc kubenswrapper[4817]: I0314 06:56:13.138771 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-hqjc5_1393ab55-8647-4411-86f7-a034c8bbd227/frr-k8s-webhook-server/0.log"
Mar 14 06:56:13 crc kubenswrapper[4817]: I0314 06:56:13.355044 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f7859bbb-rtk7b_55748352-cae5-4b0d-8d5d-ed70b1e62fbd/manager/0.log"
Mar 14 06:56:13 crc kubenswrapper[4817]: I0314 06:56:13.449038 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-757d57bdfc-8q5gf_707a2b72-26b7-48a9-b7e6-dcf7989deb6b/webhook-server/0.log"
Mar 14 06:56:13 crc kubenswrapper[4817]: I0314 06:56:13.612860 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jntvk_420d83f8-e6b6-4433-8e63-ae624bcf1241/kube-rbac-proxy/0.log"
Mar 14 06:56:14 crc kubenswrapper[4817]: I0314 06:56:14.105406 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jntvk_420d83f8-e6b6-4433-8e63-ae624bcf1241/speaker/0.log"
Mar 14 06:56:14 crc kubenswrapper[4817]: I0314 06:56:14.688615 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t5x6v_4887eb15-670e-4460-9df4-f50ff914238e/frr/0.log"
Mar 14 06:56:29 crc kubenswrapper[4817]: I0314 06:56:29.749168 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/util/0.log"
Mar 14 06:56:29 crc kubenswrapper[4817]: I0314 06:56:29.906642 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/pull/0.log"
Mar 14 06:56:29 crc kubenswrapper[4817]: I0314 06:56:29.931980 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/pull/0.log"
Mar 14 06:56:29 crc kubenswrapper[4817]: I0314 06:56:29.933629 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/util/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.130738 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/extract/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.143412 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/pull/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.184955 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v58tk_ee827efa-cd10-4520-928c-42dbfd6ab1ef/util/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.323878 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/util/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.485942 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/util/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.503565 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/pull/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.503773 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/pull/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.667118 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/util/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.686200 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/pull/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.741604 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d7j9q_8ab5ccb3-d22e-4570-a16b-c553919536d3/extract/0.log"
Mar 14 06:56:30 crc kubenswrapper[4817]: I0314 06:56:30.860860 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-utilities/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.061316 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-content/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.062903 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-content/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.080678 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-utilities/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.220747 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-utilities/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.266786 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/extract-content/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.470719 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-utilities/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.612733 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-content/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.663510 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-utilities/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.783675 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-content/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.877437 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-utilities/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.939660 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/extract-content/0.log"
Mar 14 06:56:31 crc kubenswrapper[4817]: I0314 06:56:31.952998 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zvghb_3b88a2c6-2091-4fe2-b635-16f4c1133ea7/registry-server/0.log"
Mar 14 06:56:32 crc kubenswrapper[4817]: I0314 06:56:32.403988 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-75895_2d671e4a-c87a-43b1-9bf6-bec660b13dc4/registry-server/0.log"
Mar 14 06:56:32 crc kubenswrapper[4817]: I0314 06:56:32.716243 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gtb86_22e59375-f50e-4050-aeeb-a305ffcb3572/marketplace-operator/0.log"
Mar 14 06:56:32 crc kubenswrapper[4817]: I0314 06:56:32.753681 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-utilities/0.log"
Mar 14 06:56:32 crc kubenswrapper[4817]: I0314 06:56:32.944191 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-utilities/0.log"
Mar 14 06:56:32 crc kubenswrapper[4817]: I0314 06:56:32.951427 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-content/0.log"
Mar 14 06:56:32 crc kubenswrapper[4817]: I0314 06:56:32.969113 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-content/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.129930 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-utilities/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.134160 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/extract-content/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.271483 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bb7k9_28741a2a-08fb-480b-8e68-91df7ddee923/registry-server/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.321229 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-utilities/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.488737 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-content/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.492229 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-utilities/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.515223 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-content/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.670055 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-content/0.log"
Mar 14 06:56:33 crc kubenswrapper[4817]: I0314 06:56:33.671005 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/extract-utilities/0.log"
Mar 14 06:56:34 crc kubenswrapper[4817]: I0314 06:56:34.506362 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l8vr9_57c287ae-7267-4b96-b901-70a4171a6747/registry-server/0.log"
Mar 14 06:56:48 crc kubenswrapper[4817]: I0314 06:56:48.766882 4817 scope.go:117] "RemoveContainer" containerID="25c685a01e92c8182e3972a316f563c9a4c0087058e6360967cb65b4235515e4"
Mar 14 06:56:51 crc kubenswrapper[4817]: E0314 06:56:51.249203 4817 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.29:56994->38.129.56.29:42869: write tcp 38.129.56.29:56994->38.129.56.29:42869: write: broken pipe
Mar 14 06:57:38 crc kubenswrapper[4817]: I0314 06:57:38.565947 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:57:38 crc kubenswrapper[4817]: I0314 06:57:38.566547 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.145085 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557858-twrlj"]
Mar 14 06:58:00 crc kubenswrapper[4817]: E0314 06:58:00.146258 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c48f51-ffaf-484b-bd9b-4f2dae9c8e37" containerName="oc"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.146282 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c48f51-ffaf-484b-bd9b-4f2dae9c8e37" containerName="oc"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.146591 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c48f51-ffaf-484b-bd9b-4f2dae9c8e37" containerName="oc"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.147527 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557858-twrlj"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.153840 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.154405 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.160381 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.171201 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557858-twrlj"]
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.194748 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmp4\" (UniqueName: \"kubernetes.io/projected/43939ce1-411c-4898-8157-51b57e4acd2d-kube-api-access-vjmp4\") pod \"auto-csr-approver-29557858-twrlj\" (UID: \"43939ce1-411c-4898-8157-51b57e4acd2d\") " pod="openshift-infra/auto-csr-approver-29557858-twrlj"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.296381 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmp4\" (UniqueName: \"kubernetes.io/projected/43939ce1-411c-4898-8157-51b57e4acd2d-kube-api-access-vjmp4\") pod \"auto-csr-approver-29557858-twrlj\" (UID: \"43939ce1-411c-4898-8157-51b57e4acd2d\") " pod="openshift-infra/auto-csr-approver-29557858-twrlj"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.319344 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmp4\" (UniqueName: \"kubernetes.io/projected/43939ce1-411c-4898-8157-51b57e4acd2d-kube-api-access-vjmp4\") pod \"auto-csr-approver-29557858-twrlj\" (UID: \"43939ce1-411c-4898-8157-51b57e4acd2d\") " pod="openshift-infra/auto-csr-approver-29557858-twrlj"
Mar 14 06:58:00 crc kubenswrapper[4817]: I0314 06:58:00.486543 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557858-twrlj"
Mar 14 06:58:01 crc kubenswrapper[4817]: I0314 06:58:01.027886 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557858-twrlj"]
Mar 14 06:58:01 crc kubenswrapper[4817]: I0314 06:58:01.866378 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557858-twrlj" event={"ID":"43939ce1-411c-4898-8157-51b57e4acd2d","Type":"ContainerStarted","Data":"cb12ee01b5301e461d8fd70f64981dba6fd95cd7d15dd649433c41a93381919e"}
Mar 14 06:58:02 crc kubenswrapper[4817]: I0314 06:58:02.874912 4817 generic.go:334] "Generic (PLEG): container finished" podID="43939ce1-411c-4898-8157-51b57e4acd2d" containerID="afcec99b3a57c6e732d8ab15b0157545af0c5028a5f1dc0a495ff408568451be" exitCode=0
Mar 14 06:58:02 crc kubenswrapper[4817]: I0314 06:58:02.874995 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557858-twrlj" event={"ID":"43939ce1-411c-4898-8157-51b57e4acd2d","Type":"ContainerDied","Data":"afcec99b3a57c6e732d8ab15b0157545af0c5028a5f1dc0a495ff408568451be"}
Mar 14 06:58:04 crc kubenswrapper[4817]: I0314 06:58:04.425432 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557858-twrlj"
Mar 14 06:58:04 crc kubenswrapper[4817]: I0314 06:58:04.500526 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjmp4\" (UniqueName: \"kubernetes.io/projected/43939ce1-411c-4898-8157-51b57e4acd2d-kube-api-access-vjmp4\") pod \"43939ce1-411c-4898-8157-51b57e4acd2d\" (UID: \"43939ce1-411c-4898-8157-51b57e4acd2d\") "
Mar 14 06:58:04 crc kubenswrapper[4817]: I0314 06:58:04.507637 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43939ce1-411c-4898-8157-51b57e4acd2d-kube-api-access-vjmp4" (OuterVolumeSpecName: "kube-api-access-vjmp4") pod "43939ce1-411c-4898-8157-51b57e4acd2d" (UID: "43939ce1-411c-4898-8157-51b57e4acd2d"). InnerVolumeSpecName "kube-api-access-vjmp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 06:58:04 crc kubenswrapper[4817]: I0314 06:58:04.603966 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjmp4\" (UniqueName: \"kubernetes.io/projected/43939ce1-411c-4898-8157-51b57e4acd2d-kube-api-access-vjmp4\") on node \"crc\" DevicePath \"\""
Mar 14 06:58:04 crc kubenswrapper[4817]: I0314 06:58:04.906076 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557858-twrlj" event={"ID":"43939ce1-411c-4898-8157-51b57e4acd2d","Type":"ContainerDied","Data":"cb12ee01b5301e461d8fd70f64981dba6fd95cd7d15dd649433c41a93381919e"}
Mar 14 06:58:04 crc kubenswrapper[4817]: I0314 06:58:04.906118 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb12ee01b5301e461d8fd70f64981dba6fd95cd7d15dd649433c41a93381919e"
Mar 14 06:58:04 crc kubenswrapper[4817]: I0314 06:58:04.906666 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557858-twrlj"
Mar 14 06:58:05 crc kubenswrapper[4817]: I0314 06:58:05.515643 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557852-tvz6j"]
Mar 14 06:58:05 crc kubenswrapper[4817]: I0314 06:58:05.528697 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557852-tvz6j"]
Mar 14 06:58:06 crc kubenswrapper[4817]: I0314 06:58:06.749749 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f66bec4-4a89-41af-89c2-0cc4c132cd5c" path="/var/lib/kubelet/pods/2f66bec4-4a89-41af-89c2-0cc4c132cd5c/volumes"
Mar 14 06:58:08 crc kubenswrapper[4817]: I0314 06:58:08.565770 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:58:08 crc kubenswrapper[4817]: I0314 06:58:08.566428 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:58:34 crc kubenswrapper[4817]: I0314 06:58:34.238190 4817 generic.go:334] "Generic (PLEG): container finished" podID="be945528-09ce-4135-b245-447831cdc494" containerID="b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e" exitCode=0
Mar 14 06:58:34 crc kubenswrapper[4817]: I0314 06:58:34.238476 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7rgld/must-gather-f8j2n" event={"ID":"be945528-09ce-4135-b245-447831cdc494","Type":"ContainerDied","Data":"b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e"}
Mar 14 06:58:34 crc kubenswrapper[4817]: I0314 06:58:34.239633 4817 scope.go:117] "RemoveContainer" containerID="b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e"
Mar 14 06:58:34 crc kubenswrapper[4817]: I0314 06:58:34.654414 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7rgld_must-gather-f8j2n_be945528-09ce-4135-b245-447831cdc494/gather/0.log"
Mar 14 06:58:38 crc kubenswrapper[4817]: I0314 06:58:38.566057 4817 patch_prober.go:28] interesting pod/machine-config-daemon-f8hwl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 14 06:58:38 crc kubenswrapper[4817]: I0314 06:58:38.566756 4817 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 14 06:58:38 crc kubenswrapper[4817]: I0314 06:58:38.566818 4817 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl"
Mar 14 06:58:38 crc kubenswrapper[4817]: I0314 06:58:38.568096 4817 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"} pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 14 06:58:38 crc kubenswrapper[4817]: I0314 06:58:38.568177 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerName="machine-config-daemon" containerID="cri-o://0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" gracePeriod=600
Mar 14 06:58:39 crc kubenswrapper[4817]: E0314 06:58:39.211765 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:58:39 crc kubenswrapper[4817]: I0314 06:58:39.302471 4817 generic.go:334] "Generic (PLEG): container finished" podID="676c3e1e-370b-4a49-80c6-27422d2d1d56" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" exitCode=0
Mar 14 06:58:39 crc kubenswrapper[4817]: I0314 06:58:39.302543 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" event={"ID":"676c3e1e-370b-4a49-80c6-27422d2d1d56","Type":"ContainerDied","Data":"0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"}
Mar 14 06:58:39 crc kubenswrapper[4817]: I0314 06:58:39.302599 4817 scope.go:117] "RemoveContainer" containerID="a981ac978b9b864b85dc9598c79af70065a145631ca5c16d368284a9ee5e7be8"
Mar 14 06:58:39 crc kubenswrapper[4817]: I0314 06:58:39.303465 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 06:58:39 crc kubenswrapper[4817]: E0314 06:58:39.303929 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 06:58:45 crc kubenswrapper[4817]: I0314 06:58:45.695451 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7rgld/must-gather-f8j2n"]
Mar 14 06:58:45 crc kubenswrapper[4817]: I0314 06:58:45.696366 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7rgld/must-gather-f8j2n" podUID="be945528-09ce-4135-b245-447831cdc494" containerName="copy" containerID="cri-o://12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b" gracePeriod=2
Mar 14 06:58:45 crc kubenswrapper[4817]: I0314 06:58:45.705531 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7rgld/must-gather-f8j2n"]
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.296598 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7rgld_must-gather-f8j2n_be945528-09ce-4135-b245-447831cdc494/copy/0.log"
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.298112 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/must-gather-f8j2n"
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.398865 4817 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7rgld_must-gather-f8j2n_be945528-09ce-4135-b245-447831cdc494/copy/0.log"
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.399399 4817 generic.go:334] "Generic (PLEG): container finished" podID="be945528-09ce-4135-b245-447831cdc494" containerID="12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b" exitCode=143
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.399455 4817 scope.go:117] "RemoveContainer" containerID="12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b"
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.399591 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7rgld/must-gather-f8j2n"
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.407841 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/be945528-09ce-4135-b245-447831cdc494-must-gather-output\") pod \"be945528-09ce-4135-b245-447831cdc494\" (UID: \"be945528-09ce-4135-b245-447831cdc494\") "
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.407967 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmc8w\" (UniqueName: \"kubernetes.io/projected/be945528-09ce-4135-b245-447831cdc494-kube-api-access-tmc8w\") pod \"be945528-09ce-4135-b245-447831cdc494\" (UID: \"be945528-09ce-4135-b245-447831cdc494\") "
Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.417325 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be945528-09ce-4135-b245-447831cdc494-kube-api-access-tmc8w" (OuterVolumeSpecName: "kube-api-access-tmc8w") pod "be945528-09ce-4135-b245-447831cdc494" (UID:
"be945528-09ce-4135-b245-447831cdc494"). InnerVolumeSpecName "kube-api-access-tmc8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.440963 4817 scope.go:117] "RemoveContainer" containerID="b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e" Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.511566 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmc8w\" (UniqueName: \"kubernetes.io/projected/be945528-09ce-4135-b245-447831cdc494-kube-api-access-tmc8w\") on node \"crc\" DevicePath \"\"" Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.642281 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be945528-09ce-4135-b245-447831cdc494-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "be945528-09ce-4135-b245-447831cdc494" (UID: "be945528-09ce-4135-b245-447831cdc494"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.718185 4817 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/be945528-09ce-4135-b245-447831cdc494-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 14 06:58:46 crc kubenswrapper[4817]: I0314 06:58:46.776149 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be945528-09ce-4135-b245-447831cdc494" path="/var/lib/kubelet/pods/be945528-09ce-4135-b245-447831cdc494/volumes" Mar 14 06:58:47 crc kubenswrapper[4817]: I0314 06:58:47.033885 4817 scope.go:117] "RemoveContainer" containerID="12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b" Mar 14 06:58:47 crc kubenswrapper[4817]: E0314 06:58:47.034668 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b\": container with ID starting with 12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b not found: ID does not exist" containerID="12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b" Mar 14 06:58:47 crc kubenswrapper[4817]: I0314 06:58:47.034718 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b"} err="failed to get container status \"12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b\": rpc error: code = NotFound desc = could not find container \"12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b\": container with ID starting with 12cdd7fd134ebfd0e4b2624a66a0a3cf6140453fe3c8c350c0407a3eb10eae5b not found: ID does not exist" Mar 14 06:58:47 crc kubenswrapper[4817]: I0314 06:58:47.034739 4817 scope.go:117] "RemoveContainer" containerID="b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e" Mar 14 06:58:47 crc kubenswrapper[4817]: E0314 06:58:47.035235 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e\": container with ID starting with b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e not found: ID does not exist" containerID="b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e" Mar 14 06:58:47 crc kubenswrapper[4817]: I0314 06:58:47.035284 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e"} err="failed to get container status \"b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e\": rpc error: code = NotFound desc = could not find container \"b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e\": container with ID 
starting with b4ba1145270d95da1b969b24929650e7f054e98de32723ccf1d0c104fa1ab97e not found: ID does not exist" Mar 14 06:58:48 crc kubenswrapper[4817]: I0314 06:58:48.889533 4817 scope.go:117] "RemoveContainer" containerID="490e9d82af386bf868da4cec87b3cf2eae83481c35ebac17ea6aee1de55bc79d" Mar 14 06:58:52 crc kubenswrapper[4817]: I0314 06:58:52.733065 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 06:58:52 crc kubenswrapper[4817]: E0314 06:58:52.734127 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:59:07 crc kubenswrapper[4817]: I0314 06:59:07.732036 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 06:59:07 crc kubenswrapper[4817]: E0314 06:59:07.733048 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:59:19 crc kubenswrapper[4817]: I0314 06:59:19.746942 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 06:59:19 crc kubenswrapper[4817]: E0314 06:59:19.748576 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:59:31 crc kubenswrapper[4817]: I0314 06:59:31.733345 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 06:59:31 crc kubenswrapper[4817]: E0314 06:59:31.734625 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 06:59:46 crc kubenswrapper[4817]: I0314 06:59:46.744623 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 06:59:46 crc kubenswrapper[4817]: E0314 06:59:46.746040 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.150090 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557860-2fbmq"] Mar 14 07:00:00 crc kubenswrapper[4817]: E0314 07:00:00.151212 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945528-09ce-4135-b245-447831cdc494" 
containerName="gather" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.151228 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945528-09ce-4135-b245-447831cdc494" containerName="gather" Mar 14 07:00:00 crc kubenswrapper[4817]: E0314 07:00:00.151248 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945528-09ce-4135-b245-447831cdc494" containerName="copy" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.151257 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945528-09ce-4135-b245-447831cdc494" containerName="copy" Mar 14 07:00:00 crc kubenswrapper[4817]: E0314 07:00:00.151271 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43939ce1-411c-4898-8157-51b57e4acd2d" containerName="oc" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.151281 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="43939ce1-411c-4898-8157-51b57e4acd2d" containerName="oc" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.151525 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945528-09ce-4135-b245-447831cdc494" containerName="gather" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.151540 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945528-09ce-4135-b245-447831cdc494" containerName="copy" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.151551 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="43939ce1-411c-4898-8157-51b57e4acd2d" containerName="oc" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.152357 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557860-2fbmq" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.156881 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.157082 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.156976 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.178575 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh"] Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.180256 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.182912 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.183143 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.202272 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557860-2fbmq"] Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.212260 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh"] Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.265582 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/57bcbeb4-a151-4308-b4f5-5e61626fcac3-secret-volume\") pod \"collect-profiles-29557860-6f5fh\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.265635 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sgc\" (UniqueName: \"kubernetes.io/projected/57bcbeb4-a151-4308-b4f5-5e61626fcac3-kube-api-access-z5sgc\") pod \"collect-profiles-29557860-6f5fh\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.265801 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd45h\" (UniqueName: \"kubernetes.io/projected/9809df95-4819-43c2-907a-3f29ccedc696-kube-api-access-gd45h\") pod \"auto-csr-approver-29557860-2fbmq\" (UID: \"9809df95-4819-43c2-907a-3f29ccedc696\") " pod="openshift-infra/auto-csr-approver-29557860-2fbmq" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.265847 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57bcbeb4-a151-4308-b4f5-5e61626fcac3-config-volume\") pod \"collect-profiles-29557860-6f5fh\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.368038 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/57bcbeb4-a151-4308-b4f5-5e61626fcac3-secret-volume\") pod \"collect-profiles-29557860-6f5fh\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 
07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.368120 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sgc\" (UniqueName: \"kubernetes.io/projected/57bcbeb4-a151-4308-b4f5-5e61626fcac3-kube-api-access-z5sgc\") pod \"collect-profiles-29557860-6f5fh\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.368304 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd45h\" (UniqueName: \"kubernetes.io/projected/9809df95-4819-43c2-907a-3f29ccedc696-kube-api-access-gd45h\") pod \"auto-csr-approver-29557860-2fbmq\" (UID: \"9809df95-4819-43c2-907a-3f29ccedc696\") " pod="openshift-infra/auto-csr-approver-29557860-2fbmq" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.368369 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57bcbeb4-a151-4308-b4f5-5e61626fcac3-config-volume\") pod \"collect-profiles-29557860-6f5fh\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.370363 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57bcbeb4-a151-4308-b4f5-5e61626fcac3-config-volume\") pod \"collect-profiles-29557860-6f5fh\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.393721 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/57bcbeb4-a151-4308-b4f5-5e61626fcac3-secret-volume\") pod \"collect-profiles-29557860-6f5fh\" (UID: 
\"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.405124 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd45h\" (UniqueName: \"kubernetes.io/projected/9809df95-4819-43c2-907a-3f29ccedc696-kube-api-access-gd45h\") pod \"auto-csr-approver-29557860-2fbmq\" (UID: \"9809df95-4819-43c2-907a-3f29ccedc696\") " pod="openshift-infra/auto-csr-approver-29557860-2fbmq" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.407436 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sgc\" (UniqueName: \"kubernetes.io/projected/57bcbeb4-a151-4308-b4f5-5e61626fcac3-kube-api-access-z5sgc\") pod \"collect-profiles-29557860-6f5fh\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.477465 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557860-2fbmq" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.499861 4817 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.733320 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 07:00:00 crc kubenswrapper[4817]: E0314 07:00:00.733586 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.947031 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557860-2fbmq"] Mar 14 07:00:00 crc kubenswrapper[4817]: I0314 07:00:00.954013 4817 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 07:00:01 crc kubenswrapper[4817]: I0314 07:00:01.027577 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh"] Mar 14 07:00:01 crc kubenswrapper[4817]: W0314 07:00:01.036142 4817 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57bcbeb4_a151_4308_b4f5_5e61626fcac3.slice/crio-545a98b8448da5257336d05ebab4c8f07cc7cab3afe232aee6f7d24385db8bda WatchSource:0}: Error finding container 545a98b8448da5257336d05ebab4c8f07cc7cab3afe232aee6f7d24385db8bda: Status 404 returned error can't find the container with id 545a98b8448da5257336d05ebab4c8f07cc7cab3afe232aee6f7d24385db8bda Mar 14 07:00:01 crc kubenswrapper[4817]: I0314 07:00:01.227469 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" event={"ID":"57bcbeb4-a151-4308-b4f5-5e61626fcac3","Type":"ContainerStarted","Data":"bb7cd558815b825a56d1bccbdbb9b19ae773cbfe84e80bd7c5575a45b0ebe950"} Mar 14 07:00:01 crc kubenswrapper[4817]: I0314 07:00:01.227770 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" event={"ID":"57bcbeb4-a151-4308-b4f5-5e61626fcac3","Type":"ContainerStarted","Data":"545a98b8448da5257336d05ebab4c8f07cc7cab3afe232aee6f7d24385db8bda"} Mar 14 07:00:01 crc kubenswrapper[4817]: I0314 07:00:01.228917 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557860-2fbmq" event={"ID":"9809df95-4819-43c2-907a-3f29ccedc696","Type":"ContainerStarted","Data":"4996940db7f6d279bc3df052c04cbf1570332d740f0510e44d37401210c23dbf"} Mar 14 07:00:01 crc kubenswrapper[4817]: I0314 07:00:01.249859 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" podStartSLOduration=1.249839396 podStartE2EDuration="1.249839396s" podCreationTimestamp="2026-03-14 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:00:01.246380858 +0000 UTC m=+5255.284641644" watchObservedRunningTime="2026-03-14 07:00:01.249839396 +0000 UTC m=+5255.288100132" Mar 14 07:00:02 crc kubenswrapper[4817]: I0314 07:00:02.242802 4817 generic.go:334] "Generic (PLEG): container finished" podID="57bcbeb4-a151-4308-b4f5-5e61626fcac3" containerID="bb7cd558815b825a56d1bccbdbb9b19ae773cbfe84e80bd7c5575a45b0ebe950" exitCode=0 Mar 14 07:00:02 crc kubenswrapper[4817]: I0314 07:00:02.242989 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" 
event={"ID":"57bcbeb4-a151-4308-b4f5-5e61626fcac3","Type":"ContainerDied","Data":"bb7cd558815b825a56d1bccbdbb9b19ae773cbfe84e80bd7c5575a45b0ebe950"} Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.691085 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.746094 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57bcbeb4-a151-4308-b4f5-5e61626fcac3-config-volume\") pod \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.746256 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/57bcbeb4-a151-4308-b4f5-5e61626fcac3-secret-volume\") pod \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.746315 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5sgc\" (UniqueName: \"kubernetes.io/projected/57bcbeb4-a151-4308-b4f5-5e61626fcac3-kube-api-access-z5sgc\") pod \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\" (UID: \"57bcbeb4-a151-4308-b4f5-5e61626fcac3\") " Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.747071 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57bcbeb4-a151-4308-b4f5-5e61626fcac3-config-volume" (OuterVolumeSpecName: "config-volume") pod "57bcbeb4-a151-4308-b4f5-5e61626fcac3" (UID: "57bcbeb4-a151-4308-b4f5-5e61626fcac3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.754161 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bcbeb4-a151-4308-b4f5-5e61626fcac3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "57bcbeb4-a151-4308-b4f5-5e61626fcac3" (UID: "57bcbeb4-a151-4308-b4f5-5e61626fcac3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.754546 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bcbeb4-a151-4308-b4f5-5e61626fcac3-kube-api-access-z5sgc" (OuterVolumeSpecName: "kube-api-access-z5sgc") pod "57bcbeb4-a151-4308-b4f5-5e61626fcac3" (UID: "57bcbeb4-a151-4308-b4f5-5e61626fcac3"). InnerVolumeSpecName "kube-api-access-z5sgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.848383 4817 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/57bcbeb4-a151-4308-b4f5-5e61626fcac3-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.849610 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5sgc\" (UniqueName: \"kubernetes.io/projected/57bcbeb4-a151-4308-b4f5-5e61626fcac3-kube-api-access-z5sgc\") on node \"crc\" DevicePath \"\"" Mar 14 07:00:03 crc kubenswrapper[4817]: I0314 07:00:03.849662 4817 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57bcbeb4-a151-4308-b4f5-5e61626fcac3-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 07:00:04 crc kubenswrapper[4817]: I0314 07:00:04.293006 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" 
event={"ID":"57bcbeb4-a151-4308-b4f5-5e61626fcac3","Type":"ContainerDied","Data":"545a98b8448da5257336d05ebab4c8f07cc7cab3afe232aee6f7d24385db8bda"} Mar 14 07:00:04 crc kubenswrapper[4817]: I0314 07:00:04.293380 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="545a98b8448da5257336d05ebab4c8f07cc7cab3afe232aee6f7d24385db8bda" Mar 14 07:00:04 crc kubenswrapper[4817]: I0314 07:00:04.293460 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557860-6f5fh" Mar 14 07:00:04 crc kubenswrapper[4817]: I0314 07:00:04.347143 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"] Mar 14 07:00:04 crc kubenswrapper[4817]: I0314 07:00:04.358110 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557815-7t7h9"] Mar 14 07:00:04 crc kubenswrapper[4817]: I0314 07:00:04.742435 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64feffd-b411-4202-be0f-05aa1181f11b" path="/var/lib/kubelet/pods/a64feffd-b411-4202-be0f-05aa1181f11b/volumes" Mar 14 07:00:05 crc kubenswrapper[4817]: I0314 07:00:05.308162 4817 generic.go:334] "Generic (PLEG): container finished" podID="9809df95-4819-43c2-907a-3f29ccedc696" containerID="63afe9b4d16b6f073cfdfd65ca05de264d7258fad3c7939338e42d80e06e690d" exitCode=0 Mar 14 07:00:05 crc kubenswrapper[4817]: I0314 07:00:05.308307 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557860-2fbmq" event={"ID":"9809df95-4819-43c2-907a-3f29ccedc696","Type":"ContainerDied","Data":"63afe9b4d16b6f073cfdfd65ca05de264d7258fad3c7939338e42d80e06e690d"} Mar 14 07:00:06 crc kubenswrapper[4817]: I0314 07:00:06.740098 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557860-2fbmq"
Mar 14 07:00:06 crc kubenswrapper[4817]: I0314 07:00:06.819241 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd45h\" (UniqueName: \"kubernetes.io/projected/9809df95-4819-43c2-907a-3f29ccedc696-kube-api-access-gd45h\") pod \"9809df95-4819-43c2-907a-3f29ccedc696\" (UID: \"9809df95-4819-43c2-907a-3f29ccedc696\") "
Mar 14 07:00:06 crc kubenswrapper[4817]: I0314 07:00:06.828238 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9809df95-4819-43c2-907a-3f29ccedc696-kube-api-access-gd45h" (OuterVolumeSpecName: "kube-api-access-gd45h") pod "9809df95-4819-43c2-907a-3f29ccedc696" (UID: "9809df95-4819-43c2-907a-3f29ccedc696"). InnerVolumeSpecName "kube-api-access-gd45h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:00:06 crc kubenswrapper[4817]: I0314 07:00:06.921989 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd45h\" (UniqueName: \"kubernetes.io/projected/9809df95-4819-43c2-907a-3f29ccedc696-kube-api-access-gd45h\") on node \"crc\" DevicePath \"\""
Mar 14 07:00:07 crc kubenswrapper[4817]: I0314 07:00:07.335171 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557860-2fbmq" event={"ID":"9809df95-4819-43c2-907a-3f29ccedc696","Type":"ContainerDied","Data":"4996940db7f6d279bc3df052c04cbf1570332d740f0510e44d37401210c23dbf"}
Mar 14 07:00:07 crc kubenswrapper[4817]: I0314 07:00:07.335556 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4996940db7f6d279bc3df052c04cbf1570332d740f0510e44d37401210c23dbf"
Mar 14 07:00:07 crc kubenswrapper[4817]: I0314 07:00:07.335285 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557860-2fbmq"
Mar 14 07:00:07 crc kubenswrapper[4817]: I0314 07:00:07.820773 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557854-j5qtb"]
Mar 14 07:00:07 crc kubenswrapper[4817]: I0314 07:00:07.837887 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557854-j5qtb"]
Mar 14 07:00:08 crc kubenswrapper[4817]: I0314 07:00:08.747806 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c63f9c2-1e95-432e-9fb3-1313834da13d" path="/var/lib/kubelet/pods/7c63f9c2-1e95-432e-9fb3-1313834da13d/volumes"
Mar 14 07:00:12 crc kubenswrapper[4817]: I0314 07:00:12.732056 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:00:12 crc kubenswrapper[4817]: E0314 07:00:12.732648 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:00:27 crc kubenswrapper[4817]: I0314 07:00:27.733408 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:00:27 crc kubenswrapper[4817]: E0314 07:00:27.734562 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:00:41 crc kubenswrapper[4817]: I0314 07:00:41.732592 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:00:41 crc kubenswrapper[4817]: E0314 07:00:41.734964 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:00:49 crc kubenswrapper[4817]: I0314 07:00:49.023177 4817 scope.go:117] "RemoveContainer" containerID="8d2949f5cc37c3957d9aab0b7ab2925f769a3f62881d294f784abb7157017909"
Mar 14 07:00:49 crc kubenswrapper[4817]: I0314 07:00:49.061654 4817 scope.go:117] "RemoveContainer" containerID="2f11abb370d8224d777970aa6dc4799a65665ebdcb65168eff74ddf9ddc7ee9b"
Mar 14 07:00:52 crc kubenswrapper[4817]: I0314 07:00:52.732279 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:00:52 crc kubenswrapper[4817]: E0314 07:00:52.734223 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.202126 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29557861-zs74v"]
Mar 14 07:01:00 crc kubenswrapper[4817]: E0314 07:01:00.203246 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bcbeb4-a151-4308-b4f5-5e61626fcac3" containerName="collect-profiles"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.203265 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bcbeb4-a151-4308-b4f5-5e61626fcac3" containerName="collect-profiles"
Mar 14 07:01:00 crc kubenswrapper[4817]: E0314 07:01:00.203288 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9809df95-4819-43c2-907a-3f29ccedc696" containerName="oc"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.203297 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="9809df95-4819-43c2-907a-3f29ccedc696" containerName="oc"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.203517 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="9809df95-4819-43c2-907a-3f29ccedc696" containerName="oc"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.203618 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bcbeb4-a151-4308-b4f5-5e61626fcac3" containerName="collect-profiles"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.205918 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.211951 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29557861-zs74v"]
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.385766 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-config-data\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.386233 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-fernet-keys\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.386446 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-combined-ca-bundle\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.386788 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbb8s\" (UniqueName: \"kubernetes.io/projected/481de5a8-358f-4768-8790-653af96cb389-kube-api-access-fbb8s\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.489300 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-config-data\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.489380 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-fernet-keys\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.489458 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-combined-ca-bundle\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.489596 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbb8s\" (UniqueName: \"kubernetes.io/projected/481de5a8-358f-4768-8790-653af96cb389-kube-api-access-fbb8s\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.499559 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-config-data\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.499998 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-combined-ca-bundle\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.500301 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-fernet-keys\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.513132 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbb8s\" (UniqueName: \"kubernetes.io/projected/481de5a8-358f-4768-8790-653af96cb389-kube-api-access-fbb8s\") pod \"keystone-cron-29557861-zs74v\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") " pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:00 crc kubenswrapper[4817]: I0314 07:01:00.556155 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:01 crc kubenswrapper[4817]: I0314 07:01:01.104727 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29557861-zs74v"]
Mar 14 07:01:01 crc kubenswrapper[4817]: I0314 07:01:01.936321 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557861-zs74v" event={"ID":"481de5a8-358f-4768-8790-653af96cb389","Type":"ContainerStarted","Data":"7f66b328c53ef699a0ee366455e470d4af8d6557ce1045c818f2791aa34490f9"}
Mar 14 07:01:01 crc kubenswrapper[4817]: I0314 07:01:01.937029 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557861-zs74v" event={"ID":"481de5a8-358f-4768-8790-653af96cb389","Type":"ContainerStarted","Data":"d02e120d851a8b3376c672550a6fe2cf0d7f0098bbe3f768eda2e6679de80ccd"}
Mar 14 07:01:01 crc kubenswrapper[4817]: I0314 07:01:01.960941 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29557861-zs74v" podStartSLOduration=1.9609218510000002 podStartE2EDuration="1.960921851s" podCreationTimestamp="2026-03-14 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 07:01:01.960356015 +0000 UTC m=+5315.998616771" watchObservedRunningTime="2026-03-14 07:01:01.960921851 +0000 UTC m=+5315.999182597"
Mar 14 07:01:03 crc kubenswrapper[4817]: I0314 07:01:03.732983 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:01:03 crc kubenswrapper[4817]: E0314 07:01:03.733841 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:01:04 crc kubenswrapper[4817]: I0314 07:01:04.980939 4817 generic.go:334] "Generic (PLEG): container finished" podID="481de5a8-358f-4768-8790-653af96cb389" containerID="7f66b328c53ef699a0ee366455e470d4af8d6557ce1045c818f2791aa34490f9" exitCode=0
Mar 14 07:01:04 crc kubenswrapper[4817]: I0314 07:01:04.980996 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557861-zs74v" event={"ID":"481de5a8-358f-4768-8790-653af96cb389","Type":"ContainerDied","Data":"7f66b328c53ef699a0ee366455e470d4af8d6557ce1045c818f2791aa34490f9"}
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.366294 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.526872 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-config-data\") pod \"481de5a8-358f-4768-8790-653af96cb389\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") "
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.526974 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-combined-ca-bundle\") pod \"481de5a8-358f-4768-8790-653af96cb389\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") "
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.527017 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-fernet-keys\") pod \"481de5a8-358f-4768-8790-653af96cb389\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") "
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.527108 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbb8s\" (UniqueName: \"kubernetes.io/projected/481de5a8-358f-4768-8790-653af96cb389-kube-api-access-fbb8s\") pod \"481de5a8-358f-4768-8790-653af96cb389\" (UID: \"481de5a8-358f-4768-8790-653af96cb389\") "
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.534080 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481de5a8-358f-4768-8790-653af96cb389-kube-api-access-fbb8s" (OuterVolumeSpecName: "kube-api-access-fbb8s") pod "481de5a8-358f-4768-8790-653af96cb389" (UID: "481de5a8-358f-4768-8790-653af96cb389"). InnerVolumeSpecName "kube-api-access-fbb8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.534522 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "481de5a8-358f-4768-8790-653af96cb389" (UID: "481de5a8-358f-4768-8790-653af96cb389"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.564138 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "481de5a8-358f-4768-8790-653af96cb389" (UID: "481de5a8-358f-4768-8790-653af96cb389"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.607918 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-config-data" (OuterVolumeSpecName: "config-data") pod "481de5a8-358f-4768-8790-653af96cb389" (UID: "481de5a8-358f-4768-8790-653af96cb389"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.629693 4817 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.629732 4817 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.629745 4817 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/481de5a8-358f-4768-8790-653af96cb389-fernet-keys\") on node \"crc\" DevicePath \"\""
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.629757 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbb8s\" (UniqueName: \"kubernetes.io/projected/481de5a8-358f-4768-8790-653af96cb389-kube-api-access-fbb8s\") on node \"crc\" DevicePath \"\""
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.999921 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29557861-zs74v" event={"ID":"481de5a8-358f-4768-8790-653af96cb389","Type":"ContainerDied","Data":"d02e120d851a8b3376c672550a6fe2cf0d7f0098bbe3f768eda2e6679de80ccd"}
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:06.999962 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29557861-zs74v"
Mar 14 07:01:06 crc kubenswrapper[4817]: I0314 07:01:07.000001 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d02e120d851a8b3376c672550a6fe2cf0d7f0098bbe3f768eda2e6679de80ccd"
Mar 14 07:01:15 crc kubenswrapper[4817]: I0314 07:01:15.735022 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:01:15 crc kubenswrapper[4817]: E0314 07:01:15.735684 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:01:28 crc kubenswrapper[4817]: I0314 07:01:28.733243 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:01:28 crc kubenswrapper[4817]: E0314 07:01:28.736533 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:01:40 crc kubenswrapper[4817]: I0314 07:01:40.732600 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:01:40 crc kubenswrapper[4817]: E0314 07:01:40.733476 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:01:51 crc kubenswrapper[4817]: I0314 07:01:51.732835 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:01:51 crc kubenswrapper[4817]: E0314 07:01:51.734327 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.165908 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557862-wwqjd"]
Mar 14 07:02:00 crc kubenswrapper[4817]: E0314 07:02:00.166716 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481de5a8-358f-4768-8790-653af96cb389" containerName="keystone-cron"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.166727 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="481de5a8-358f-4768-8790-653af96cb389" containerName="keystone-cron"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.166951 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="481de5a8-358f-4768-8790-653af96cb389" containerName="keystone-cron"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.167591 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557862-wwqjd"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.170260 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.175931 4817 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-7jx88"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.186866 4817 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.205850 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557862-wwqjd"]
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.224619 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-252bf\" (UniqueName: \"kubernetes.io/projected/8aeca018-71eb-4580-8fc9-871e380a1694-kube-api-access-252bf\") pod \"auto-csr-approver-29557862-wwqjd\" (UID: \"8aeca018-71eb-4580-8fc9-871e380a1694\") " pod="openshift-infra/auto-csr-approver-29557862-wwqjd"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.326254 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-252bf\" (UniqueName: \"kubernetes.io/projected/8aeca018-71eb-4580-8fc9-871e380a1694-kube-api-access-252bf\") pod \"auto-csr-approver-29557862-wwqjd\" (UID: \"8aeca018-71eb-4580-8fc9-871e380a1694\") " pod="openshift-infra/auto-csr-approver-29557862-wwqjd"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.351763 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-252bf\" (UniqueName: \"kubernetes.io/projected/8aeca018-71eb-4580-8fc9-871e380a1694-kube-api-access-252bf\") pod \"auto-csr-approver-29557862-wwqjd\" (UID: \"8aeca018-71eb-4580-8fc9-871e380a1694\") " pod="openshift-infra/auto-csr-approver-29557862-wwqjd"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.511628 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557862-wwqjd"
Mar 14 07:02:00 crc kubenswrapper[4817]: I0314 07:02:00.984765 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557862-wwqjd"]
Mar 14 07:02:01 crc kubenswrapper[4817]: I0314 07:02:01.554636 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557862-wwqjd" event={"ID":"8aeca018-71eb-4580-8fc9-871e380a1694","Type":"ContainerStarted","Data":"a5f0890d4c1071cafe29762cea87f3d39bba677a1981c77286c803eb0fc1f385"}
Mar 14 07:02:02 crc kubenswrapper[4817]: I0314 07:02:02.572227 4817 generic.go:334] "Generic (PLEG): container finished" podID="8aeca018-71eb-4580-8fc9-871e380a1694" containerID="3da97406af7de5577ddccde583d2a6149dccde480c9aae07c54f43b41abd333d" exitCode=0
Mar 14 07:02:02 crc kubenswrapper[4817]: I0314 07:02:02.572297 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557862-wwqjd" event={"ID":"8aeca018-71eb-4580-8fc9-871e380a1694","Type":"ContainerDied","Data":"3da97406af7de5577ddccde583d2a6149dccde480c9aae07c54f43b41abd333d"}
Mar 14 07:02:03 crc kubenswrapper[4817]: I0314 07:02:03.969361 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557862-wwqjd"
Mar 14 07:02:04 crc kubenswrapper[4817]: I0314 07:02:04.009316 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-252bf\" (UniqueName: \"kubernetes.io/projected/8aeca018-71eb-4580-8fc9-871e380a1694-kube-api-access-252bf\") pod \"8aeca018-71eb-4580-8fc9-871e380a1694\" (UID: \"8aeca018-71eb-4580-8fc9-871e380a1694\") "
Mar 14 07:02:04 crc kubenswrapper[4817]: I0314 07:02:04.015728 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aeca018-71eb-4580-8fc9-871e380a1694-kube-api-access-252bf" (OuterVolumeSpecName: "kube-api-access-252bf") pod "8aeca018-71eb-4580-8fc9-871e380a1694" (UID: "8aeca018-71eb-4580-8fc9-871e380a1694"). InnerVolumeSpecName "kube-api-access-252bf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 07:02:04 crc kubenswrapper[4817]: I0314 07:02:04.112172 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-252bf\" (UniqueName: \"kubernetes.io/projected/8aeca018-71eb-4580-8fc9-871e380a1694-kube-api-access-252bf\") on node \"crc\" DevicePath \"\""
Mar 14 07:02:04 crc kubenswrapper[4817]: I0314 07:02:04.615257 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557862-wwqjd" event={"ID":"8aeca018-71eb-4580-8fc9-871e380a1694","Type":"ContainerDied","Data":"a5f0890d4c1071cafe29762cea87f3d39bba677a1981c77286c803eb0fc1f385"}
Mar 14 07:02:04 crc kubenswrapper[4817]: I0314 07:02:04.615309 4817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f0890d4c1071cafe29762cea87f3d39bba677a1981c77286c803eb0fc1f385"
Mar 14 07:02:04 crc kubenswrapper[4817]: I0314 07:02:04.615392 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557862-wwqjd"
Mar 14 07:02:05 crc kubenswrapper[4817]: I0314 07:02:05.054799 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557856-666hl"]
Mar 14 07:02:05 crc kubenswrapper[4817]: I0314 07:02:05.065722 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557856-666hl"]
Mar 14 07:02:06 crc kubenswrapper[4817]: I0314 07:02:06.745307 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:02:06 crc kubenswrapper[4817]: I0314 07:02:06.745346 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c48f51-ffaf-484b-bd9b-4f2dae9c8e37" path="/var/lib/kubelet/pods/58c48f51-ffaf-484b-bd9b-4f2dae9c8e37/volumes"
Mar 14 07:02:06 crc kubenswrapper[4817]: E0314 07:02:06.745941 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.752705 4817 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-prt8d"]
Mar 14 07:02:20 crc kubenswrapper[4817]: E0314 07:02:20.753515 4817 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aeca018-71eb-4580-8fc9-871e380a1694" containerName="oc"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.753527 4817 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aeca018-71eb-4580-8fc9-871e380a1694" containerName="oc"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.753733 4817 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aeca018-71eb-4580-8fc9-871e380a1694" containerName="oc"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.754938 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prt8d"]
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.755020 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.795593 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-catalog-content\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.795864 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-utilities\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.795960 4817 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nrp5\" (UniqueName: \"kubernetes.io/projected/ce84807d-83a1-475d-a3e6-e979d4dd24f0-kube-api-access-5nrp5\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.897352 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-utilities\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.897442 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nrp5\" (UniqueName: \"kubernetes.io/projected/ce84807d-83a1-475d-a3e6-e979d4dd24f0-kube-api-access-5nrp5\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.897497 4817 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-catalog-content\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.897945 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-catalog-content\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.897966 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-utilities\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:20 crc kubenswrapper[4817]: I0314 07:02:20.922341 4817 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nrp5\" (UniqueName: \"kubernetes.io/projected/ce84807d-83a1-475d-a3e6-e979d4dd24f0-kube-api-access-5nrp5\") pod \"redhat-marketplace-prt8d\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:21 crc kubenswrapper[4817]: I0314 07:02:21.098321 4817 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:21 crc kubenswrapper[4817]: I0314 07:02:21.626372 4817 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prt8d"]
Mar 14 07:02:21 crc kubenswrapper[4817]: I0314 07:02:21.731932 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3"
Mar 14 07:02:21 crc kubenswrapper[4817]: E0314 07:02:21.732187 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56"
Mar 14 07:02:21 crc kubenswrapper[4817]: I0314 07:02:21.802535 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prt8d" event={"ID":"ce84807d-83a1-475d-a3e6-e979d4dd24f0","Type":"ContainerStarted","Data":"5b22588ef6828e74195f49ac5d99606e29e71b4ede70f3239d2a968cab2278ab"}
Mar 14 07:02:22 crc kubenswrapper[4817]: I0314 07:02:22.818833 4817 generic.go:334] "Generic (PLEG): container finished" podID="ce84807d-83a1-475d-a3e6-e979d4dd24f0" containerID="d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8" exitCode=0
Mar 14 07:02:22 crc kubenswrapper[4817]: I0314 07:02:22.819216 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prt8d" event={"ID":"ce84807d-83a1-475d-a3e6-e979d4dd24f0","Type":"ContainerDied","Data":"d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8"}
Mar 14 07:02:24 crc kubenswrapper[4817]: I0314 07:02:24.839167 4817 generic.go:334] "Generic (PLEG): container finished" podID="ce84807d-83a1-475d-a3e6-e979d4dd24f0" containerID="387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2" exitCode=0
Mar 14 07:02:24 crc kubenswrapper[4817]: I0314 07:02:24.839360 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prt8d" event={"ID":"ce84807d-83a1-475d-a3e6-e979d4dd24f0","Type":"ContainerDied","Data":"387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2"}
Mar 14 07:02:25 crc kubenswrapper[4817]: I0314 07:02:25.850936 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prt8d" event={"ID":"ce84807d-83a1-475d-a3e6-e979d4dd24f0","Type":"ContainerStarted","Data":"4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344"}
Mar 14 07:02:25 crc kubenswrapper[4817]: I0314 07:02:25.878640 4817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-prt8d" podStartSLOduration=3.136031919 podStartE2EDuration="5.878617612s" podCreationTimestamp="2026-03-14 07:02:20 +0000 UTC" firstStartedPulling="2026-03-14 07:02:22.825184614 +0000 UTC m=+5396.863445370" lastFinishedPulling="2026-03-14 07:02:25.567770317 +0000 UTC m=+5399.606031063" observedRunningTime="2026-03-14 07:02:25.870996304 +0000 UTC m=+5399.909257080" watchObservedRunningTime="2026-03-14 07:02:25.878617612 +0000 UTC m=+5399.916878348"
Mar 14 07:02:31 crc kubenswrapper[4817]: I0314 07:02:31.099602 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:31 crc kubenswrapper[4817]: I0314 07:02:31.100496 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:31 crc kubenswrapper[4817]: I0314 07:02:31.144126 4817 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:31 crc kubenswrapper[4817]: I0314 07:02:31.964905 4817 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-prt8d"
Mar 14 07:02:32 crc kubenswrapper[4817]: I0314 07:02:32.020146 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prt8d"]
Mar 14 07:02:33 crc kubenswrapper[4817]: I0314 07:02:33.929339 4817 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-prt8d" podUID="ce84807d-83a1-475d-a3e6-e979d4dd24f0" containerName="registry-server" containerID="cri-o://4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344" gracePeriod=2
Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.772845 4817 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prt8d" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.822218 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-utilities\") pod \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.822339 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nrp5\" (UniqueName: \"kubernetes.io/projected/ce84807d-83a1-475d-a3e6-e979d4dd24f0-kube-api-access-5nrp5\") pod \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.822436 4817 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-catalog-content\") pod \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\" (UID: \"ce84807d-83a1-475d-a3e6-e979d4dd24f0\") " Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.823393 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-utilities" (OuterVolumeSpecName: "utilities") pod "ce84807d-83a1-475d-a3e6-e979d4dd24f0" (UID: "ce84807d-83a1-475d-a3e6-e979d4dd24f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.830077 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce84807d-83a1-475d-a3e6-e979d4dd24f0-kube-api-access-5nrp5" (OuterVolumeSpecName: "kube-api-access-5nrp5") pod "ce84807d-83a1-475d-a3e6-e979d4dd24f0" (UID: "ce84807d-83a1-475d-a3e6-e979d4dd24f0"). InnerVolumeSpecName "kube-api-access-5nrp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.859540 4817 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce84807d-83a1-475d-a3e6-e979d4dd24f0" (UID: "ce84807d-83a1-475d-a3e6-e979d4dd24f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.924608 4817 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.924646 4817 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nrp5\" (UniqueName: \"kubernetes.io/projected/ce84807d-83a1-475d-a3e6-e979d4dd24f0-kube-api-access-5nrp5\") on node \"crc\" DevicePath \"\"" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.924658 4817 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce84807d-83a1-475d-a3e6-e979d4dd24f0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.939835 4817 generic.go:334] "Generic (PLEG): container finished" podID="ce84807d-83a1-475d-a3e6-e979d4dd24f0" containerID="4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344" exitCode=0 Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.939883 4817 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prt8d" event={"ID":"ce84807d-83a1-475d-a3e6-e979d4dd24f0","Type":"ContainerDied","Data":"4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344"} Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.939955 4817 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-prt8d" event={"ID":"ce84807d-83a1-475d-a3e6-e979d4dd24f0","Type":"ContainerDied","Data":"5b22588ef6828e74195f49ac5d99606e29e71b4ede70f3239d2a968cab2278ab"} Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.939956 4817 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prt8d" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.939984 4817 scope.go:117] "RemoveContainer" containerID="4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.981950 4817 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prt8d"] Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.987233 4817 scope.go:117] "RemoveContainer" containerID="387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2" Mar 14 07:02:34 crc kubenswrapper[4817]: I0314 07:02:34.987406 4817 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-prt8d"] Mar 14 07:02:35 crc kubenswrapper[4817]: I0314 07:02:35.008613 4817 scope.go:117] "RemoveContainer" containerID="d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8" Mar 14 07:02:35 crc kubenswrapper[4817]: I0314 07:02:35.061650 4817 scope.go:117] "RemoveContainer" containerID="4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344" Mar 14 07:02:35 crc kubenswrapper[4817]: E0314 07:02:35.062220 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344\": container with ID starting with 4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344 not found: ID does not exist" containerID="4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344" Mar 14 07:02:35 crc kubenswrapper[4817]: I0314 07:02:35.062287 4817 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344"} err="failed to get container status \"4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344\": rpc error: code = NotFound desc = could not find container \"4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344\": container with ID starting with 4540c53a1645abc57676cfd2c95db17d48e6dbe100de0676a98984bd630f0344 not found: ID does not exist" Mar 14 07:02:35 crc kubenswrapper[4817]: I0314 07:02:35.062333 4817 scope.go:117] "RemoveContainer" containerID="387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2" Mar 14 07:02:35 crc kubenswrapper[4817]: E0314 07:02:35.062973 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2\": container with ID starting with 387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2 not found: ID does not exist" containerID="387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2" Mar 14 07:02:35 crc kubenswrapper[4817]: I0314 07:02:35.063016 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2"} err="failed to get container status \"387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2\": rpc error: code = NotFound desc = could not find container \"387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2\": container with ID starting with 387826d79eb54762e2d9ca275860bf163eeead1938e73f0ba7a9a01e2b2061f2 not found: ID does not exist" Mar 14 07:02:35 crc kubenswrapper[4817]: I0314 07:02:35.063053 4817 scope.go:117] "RemoveContainer" containerID="d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8" Mar 14 07:02:35 crc kubenswrapper[4817]: E0314 
07:02:35.063451 4817 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8\": container with ID starting with d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8 not found: ID does not exist" containerID="d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8" Mar 14 07:02:35 crc kubenswrapper[4817]: I0314 07:02:35.063489 4817 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8"} err="failed to get container status \"d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8\": rpc error: code = NotFound desc = could not find container \"d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8\": container with ID starting with d821b0008677f3e48efc06f3564868a7bcc7c20fa3dad9fbe83c602fc05ca6a8 not found: ID does not exist" Mar 14 07:02:35 crc kubenswrapper[4817]: I0314 07:02:35.734113 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 07:02:35 crc kubenswrapper[4817]: E0314 07:02:35.734409 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 07:02:36 crc kubenswrapper[4817]: I0314 07:02:36.748886 4817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce84807d-83a1-475d-a3e6-e979d4dd24f0" path="/var/lib/kubelet/pods/ce84807d-83a1-475d-a3e6-e979d4dd24f0/volumes" Mar 14 07:02:49 crc kubenswrapper[4817]: I0314 07:02:49.188852 
4817 scope.go:117] "RemoveContainer" containerID="8c3c23bd74a8e8a98106ccafc98b60700544d1d786a0a724b64b51498b242837" Mar 14 07:02:50 crc kubenswrapper[4817]: I0314 07:02:50.732371 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 07:02:50 crc kubenswrapper[4817]: E0314 07:02:50.733150 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 07:03:04 crc kubenswrapper[4817]: I0314 07:03:04.732225 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 07:03:04 crc kubenswrapper[4817]: E0314 07:03:04.733398 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 07:03:18 crc kubenswrapper[4817]: I0314 07:03:18.733378 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 07:03:18 crc kubenswrapper[4817]: E0314 07:03:18.734326 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" Mar 14 07:03:30 crc kubenswrapper[4817]: I0314 07:03:30.732115 4817 scope.go:117] "RemoveContainer" containerID="0783c85478bdbdca3953868c988250b1894db932141e85b96475952bdc430db3" Mar 14 07:03:30 crc kubenswrapper[4817]: E0314 07:03:30.732816 4817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f8hwl_openshift-machine-config-operator(676c3e1e-370b-4a49-80c6-27422d2d1d56)\"" pod="openshift-machine-config-operator/machine-config-daemon-f8hwl" podUID="676c3e1e-370b-4a49-80c6-27422d2d1d56" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515155204126024446 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015155204126017363 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015155171012016503 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015155171012015453 5ustar corecore